Monday, October 29, 2007

Server Cabinet Organization Tips

Just in time for Halloween, check out this classic server room cabling nightmare at TechRepublic. Scary stuff.

Good data center design combines high-level conceptual thinking and strategic planning with close attention to detail. Obviously, things like the cooling system and support infrastructure are critical to maintaining an always-available data center, but smaller things like well-organized server cabinets also contribute to the overall efficiency of a data center or computer room. With that in mind, I thought I’d share a few of our guidelines and best practices for organizing your cabinets.

In no particular order:

1. Place heavier equipment on the bottom, lighter equipment towards the top

2. Use blanking plates to fill equipment gaps to prevent hot air from re-circulating back to the front

3. Use a cabinet deep enough to accommodate cable organization and airflow in the rear of the cabinet

4. Use perforated front and rear doors when using the room for air distribution

5. Make sure doors can be locked for security

6. PTS prefers using a patch panel in each cabinet for data distribution. We typically install it in the top rear U’s, but are experimenting with vertical rear channel patch cable distribution

7. PTS prefers using vertical power strips in a rear channel of the cabinet with short power cords for server-to-power-strip distribution

8. While they are convenient, do not use cable management arms that fold the cables against the back of the server, as they impede the server’s exhaust airflow

9. Don’t use roof fans without front-to-rear baffling. They suck as much cold air from the front as they do hot air from the rear.

10. Monitor air inlet temperature ¾ of the way up the front of the cabinet (see the monitoring sketch after this list)

11. Use U-numbered vertical rails to make mounting equipment easier

12. Have a cabinet numbering convention and floor layout map

13. Use color-coded cabling for different services

14. Separate power and network cabling distribution on opposite sides of the cabinet

15. PTS often uses the tops of the cabinet to facilitate cabinet-to-cabinet power and data cable distribution
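
To make tip 10 concrete (and to show tip 12’s numbering convention in use), here is a minimal monitoring sketch in Python. The cabinet names, readings, and alarm thresholds are illustrative assumptions, not values from an actual PTS deployment:

# Flag cabinets whose inlet temperature (measured about 3/4 of the way up the
# front of the cabinet, per tip 10) drifts outside a chosen alarm range.
ALARM_LOW_C = 18.0    # example thresholds; pick values that suit your design
ALARM_HIGH_C = 27.0

readings_c = {        # cabinet ID (tip 12's numbering convention) -> inlet temp, deg C
    "ROW-A-CAB-01": 22.5,
    "ROW-A-CAB-02": 28.1,
    "ROW-B-CAB-01": 19.0,
}

for cabinet, temp_c in sorted(readings_c.items()):
    if temp_c < ALARM_LOW_C or temp_c > ALARM_HIGH_C:
        print(f"{cabinet}: inlet {temp_c:.1f} C is outside the {ALARM_LOW_C}-{ALARM_HIGH_C} C range")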

As you can see, the little things do make a difference. And by instituting some or all of these practices, you’ll be one step closer to 24/7 availability.

Wednesday, October 24, 2007

The Role of Sprinklers in Computer Room Fire Protection

A number of clients have asked us about the viability of replacing their ‘wet’ sprinkler systems with a dry-type fire suppression system, such as FM-200. Few IT personnel understand the role of water-based fire suppression systems, but all recognize that water in the data processing environment can be a “bad thing.”
 
The short answer is that sprinkler systems protect the building and dry-type systems protect the equipment. In most cases a dry-type system cannot take the place of a sprinkler system; it can only be installed in addition to it. At the end of the day, the local fire inspector is the authority and has jurisdiction over what is permissible. This is the reason that pre-action sprinkler systems are primarily used for computer room fire protection.
 
That being said, fire prevention provides more protection against damage than any type of detection or suppression equipment available. For Tier I and Tier II computer rooms, PTS often recommends installing only a pre-action sprinkler system activated by a photo-electric smoke detection system, forgoing a dry-type system and a VESDA system. We find the most effective strategy is to emphasize prevention and early detection. This allows the client to maximize availability by investing in solutions for areas of higher risk, such as fully redundant power and cooling systems.
 
For more information on fire protection, read our vendor white paper, “Mitigating Fire Risks in Mission Critical Facilities,” which provides a clear understanding of the creation, detection, suppression, and prevention of fire within mission critical facilities. The paper also discusses fire codes for information technology environments and provides best practices for increasing availability.

Friday, September 28, 2007

Blogging in Good Company…

Happy Friday, everyone!

A little over a year ago, Rich Miller at Data Center Knowledge put together a list of data center-related blogs, some of which have now become part of my regular reading habit. (Thanks, Rich, for including the PTS blog in that list.)

Expanding on Data Center Knowledge’s list, here are a few blogs that I try to keep tabs on:
* Cisco’s Data Center Networks
* Virtual Graffiti’s APCGuard
* John Rath’s Data Center Information
* SearchDataCenter.com’s Server Specs
* Matt Stansberry’s SearchDataCenter.com Editorial Blog
* CEO Jonathan Schwartz’s Sun Microsystems Blog
* DD’s Eco Notes (another Sun blog)
* The Mainframe Blog
* The Next Generation Data Center Blog
* Various ITToolbox Blogs

Check them out when you have time.

By the way, the PTS Data Center Design blog has joined the MyBlogLog community. It’s a great tool for connecting with readers and authors of sites you enjoy. If you’re a MyBlogLog member, leave a message for me – it’s always great to hear from readers!

Tuesday, September 25, 2007

It Isn’t Easy Being Green: Companies forgo eco-friendly solutions

Over the past year, a number of major corporations, including News Corp and Citigroup, have announced plans to launch significant environmental initiatives. These corporations are paying particular attention to sustainability and are taking steps to build green data centers, in addition to reducing their carbon footprint in other ways.

To meet this demand, industry leaders such as Sun Microsystems, HP and IBM added energy-efficient servers and other eco-friendly technology solutions to their offerings. However, according to Going Green: Vendors Deliver Solutions to Save Money – the World:

“[E]nd users won’t rush to replace their infrastructure with greener technology, says Blair Pleasant, president and principal analyst of research firm Commfusion LLC. For one thing, there are budgets to consider. Pleasant likens the principle to the car industry — many consumers might want to drive expensive hybrids but aren’t ready to replace their perfectly serviceable, gas-powered vehicles.

Plus, there’s some skepticism that environmentally friendly systems might not work as well as familiar, existing networks. Companies “are going to have to prove that the new technologies or systems are every bit as good as what [end users] already have,” Pleasant says.”


Despite the eco-friendly peer pressure, many companies have forgone, and will continue to forgo, potential long-term savings rather than take on increased capital expenditures, at least until the premium for going green diminishes. It will be interesting to see how this plays out as the green movement continues to build steam and the media continues to barrage us with global warming news. If hybrid vehicles really start to take off, will green data centers too?

Friday, August 24, 2007

Plan Your Data Center Move (Part 2 of 2)

A successful data center relocation starts with a good plan. By placing emphasis on pre-design and planning, you can achieve an optimal solution to meet the demands of your data center move. Here are some key points to address when developing your own data center relocation strategy:

What equipment really needs to move?

An equipment migration is the perfect time to make network and network security improvements, phase out old server and storage platforms, and undertake a virtualization project to minimize the number of servers.

Is the new site’s support infrastructure prepared to accept the new load?

Is there enough UPS, cooling, power distribution, floor weight capacity, etc.? Is the data cabling strategy the same or will you be making changes? It’s helpful to retain a computer room design consultant to verify the load capacity and redundancy constraints of the new site. If working with a pre-existing space, the new computer room should be re-commissioned.
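
As a quick sanity check, something like the following Python sketch can compare the planned IT load against the new site’s rated capacities. All of the numbers and system names are illustrative assumptions, not figures from a real facility:

# Compare the planned IT load against the new site's rated capacities.
planned_it_load_kw = 60          # assumed total critical load after the move

capacities_kw = {                # assumed usable capacities at the new site
    "UPS (N)": 80,
    "Cooling (sensible)": 70,
    "Power distribution": 90,
}

for system, capacity in capacities_kw.items():
    utilization = 100.0 * planned_it_load_kw / capacity
    flag = "OK" if utilization <= 80 else "REVIEW"   # keep some headroom, e.g. below ~80%
    print(f"{system}: {utilization:.0f}% utilized -> {flag}")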

Establish corporate buy-in.

Clearly communicate the timeline of the project with everyone in the company – management and employees alike.

Identify, mark, tag, and document everything – twice!

Every piece of equipment from subfloor to ceiling – be it a cabinet, rack, power cable, power strip, patch cable, data cable, bracket, nut, or bolt – needs to be accounted for using a numbering convention that will ensure everything goes back together exactly as it came apart.
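
For illustration, here is a minimal Python sketch of one way to track such a manifest, assuming a simple “cabinet – U position – item” tag convention; the field names and sample entries are hypothetical:

from dataclasses import dataclass

@dataclass
class MoveItem:
    tag: str          # e.g. "CAB07-U12-SRV", matching the physical label on the item
    description: str
    source: str       # cabinet/U position at the old site
    destination: str  # cabinet/U position at the new site
    checked_out: bool = False   # verified packed at the old site
    checked_in: bool = False    # verified reinstalled at the new site

manifest = [
    MoveItem("CAB07-U12-SRV", "1U web server", "CAB07 U12", "CAB03 U20", True, True),
    MoveItem("CAB07-U12-PWR", "server power cord", "CAB07 rear", "CAB03 rear", True, False),
]

# Anything checked out but not yet checked in is still unaccounted for.
outstanding = [item.tag for item in manifest if item.checked_out and not item.checked_in]
print("Not yet reinstalled:", outstanding or "none")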

Develop a schedule with enough time built in for contingencies.

Allow yourself a sufficient margin of error in case there’s a hold-up at some point during the process. Build extra time in at the end of the data center relocation schedule and don’t try to do too much at one time.


For more advice on data center migration, check out "Tips For Moving Your Data Center" at Processor.com.

Monday, August 20, 2007

Plan Your Data Center Move (Part 1 of 2)

In my post “Tips for Handling Your Data Center Relocation,” I discussed some basic strategies for streamlining a data center move. Since then, I’ve received a few requests for more insight into handling the data center relocation process. In this post I’ll address whether it’s necessary to call in the pros and how to pick a data center moving company.

While in some cases the in-house team can handle the move themselves, most enterprises need a little extra help. I liken it to attempting a plumbing project on your own. The tools you need to do the job effectively are specialized, and you rarely have them on hand – in most situations, it would take you three times the effort a professional would need to do the same job. With a data center relocation project, having the right packing materials, rigging equipment, trucks, and so forth is necessary for a job well done.

Here’s an overview of how to find and hire a company to help with the data center relocation process:

Step 1: Finding a data center moving company.

Nearly every area has a company that specializes in relocating computer equipment. They can be found in the Yellow Pages, via an online search, or by asking for referrals from colleagues. The hard part is making sure you’ve found a qualified company that specializes in data center moves. Checking references is vitally important. A general rule of thumb I’ve seen people use is “The bigger the companies they work for, the better the moving company is,” but this isn’t always the case.

Step 2: Checking qualifications.

When lost or damaged equipment can mean downtime and escalating costs, the need to choose carefully is clear. The most important thing to look for is experience. How many years has the data center relocation company been in business? What’s the combined experience of their team? Have they worked on projects of similar scale to your own?

Ask specific questions to make sure they perform these services on a regular basis. What are the company’s best practices and proven methodologies? What resources and support does the company offer? How would they coordinate all aspects of the move from start to finish?

Remember that the moving company is only one part of the integrated team for an effective relocation. Be sure to involve key stakeholders in the process, including your IT, business and facilities staff as well as third-party vendors. The project team should include:

  • your internal IT and facilities staff,
  • an overall project manager (internal or external),
  • an IT services company to assist in the marking, tagging, un-cabling, un-racking, re-racking, and re-cabling of all IT infrastructure, and
  • a computer room design firm to verify the power and cooling capacity on the other side.

For a more detailed guide to hiring a firm, download my white paper, “Tips for Hiring a Data Center Consultant.”

(Next post: Establishing an overall plan for your data center move…)

Thursday, July 26, 2007

Reflections on the Data Center in a Box

Recently Jack Lyne, Executive Editor at Site Selection magazine, contacted me regarding Sun Microsystems' new Project Blackbox, colloquially dubbed the “data center in a box.” (Check out his article: “Sun’s Blackbox: A Moveable Feast for Data Centers?”) Jack’s questions led me to reflect on the current rate of adoption I’ve observed for the mobile data center.

While the energy-efficient technology offers the benefit of rapid deployment, for many companies the Blackbox does not provide a feasible alternative to the traditional brick-and-mortar data center. Similar solutions have been equally ineffective: APC’s “data center on wheels” never seemed to produce the desired impact, even though it was a neutral processing environment.

For most companies that would be in the market for this technology, the limitation is not space so much as access to adequate power and cooling. Despite its all-in-one packaging, the Blackbox does not mitigate the need for power and/or chilled water, which are two primary cost drivers of any computer room project. At best, the Blackbox is a Tier I data center as defined by the Uptime Institute’s standard, and a Tier I facility can be built just about anywhere for equal or less money.

The Data Center Journal summed up the sentiment quite nicely:

“A mobile data center is nothing new. We have seen APC deliver a mobile data center on wheels. We have seen manufacturers such as iFortress or Rittal’s Lampertz product line which both provide heavy duty and easily constructed mobile data center facilities. ...

“The Sun “Data Center in a Box” provides the industry with another choice that can meet the need of the consumer, but is it needed and will the industry embrace it or will it become a small niche market product? Time will tell.”

Monday, July 16, 2007

PTS Weighs in on Data Center Humidity Issues

Mark Fontecchio’s recent article on data center humidity issues at SearchDataCenter.com not only created buzz in the data center blogs, but generated quite a discussion amongst our team at PTS Data Center Solutions.

Data center humidity range too strict?

While some data center professionals find the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE)’s recommended relative humidity range of 40% to 55% to be restrictive, I think the tight ASHRAE standards have to be adhered to until further research proves otherwise.

PTS’s engineering manager, Dave Admirand, PE, notes that the reason the telecommunications industry is able to operate within a wider humidity range of 35% to 65% is its very strict grounding regimen. In a well-grounded system, an electrical charge has no place to build up and is more readily dissipated to ground. Mr. Admirand recalls his days at IBM, when the ‘old timers’ would swear by wearing leather-soled shoes (conductive enough to make a connection to the grounded raised floor) and/or washing their hands (presumably to carry dampness with them) prior to entering their data centers, to avoid a shock from discharging the charge built up on their bodies onto a surface.

Relative humidity vs. absolute humidity

While I think both relative and absolute humidity should be considered, many in the industry are still designing to and measuring relative humidity. PTS mechanical engineer John Lin, PhD, points out that only two psychrometric properties of the air are independent, and that data center professionals have to control the air temperature. While we can directly control only one of the humidity values, it is possible to calculate the absolute humidity (humidity ratio) from the air temperature and relative humidity. Therefore, data centers are fine as long as both temperature and relative humidity are within the permissible range.
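
As a rough illustration of that calculation, here is a short Python sketch that estimates the humidity ratio from dry-bulb temperature and relative humidity, using the Magnus approximation for saturation vapor pressure; the constants and the sample condition are textbook values for illustration, not PTS design figures:

import math

def humidity_ratio(temp_c, rh_percent, pressure_pa=101325.0):
    """Approximate humidity ratio in kg of water vapor per kg of dry air."""
    # Magnus approximation for saturation vapor pressure over water, in Pa
    p_ws = 610.94 * math.exp(17.625 * temp_c / (temp_c + 243.04))
    p_w = (rh_percent / 100.0) * p_ws            # partial pressure of water vapor
    return 0.62198 * p_w / (pressure_pa - p_w)   # 0.62198 = ratio of molar masses

# Example: supply air at 22 C and 50% RH carries roughly 0.008 kg/kg (about 8 g/kg)
print(round(humidity_ratio(22.0, 50.0), 4))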

Coy Stine’s example is right on the mark. The high temperature delta between inlet and outlet air that can be realized in some dense IT equipment may lead to very low relative humidity air inside critical electronics, which can lead to electrostatic discharge (ESD). My experience, however, is that I am not encountering ESD-related data loss scenarios at the estimated 50 to 100 data centers I visit each year. This leads me to believe that there is a slight tendency to ‘make mountains out of molehills’ regarding ESD.

After further reflection on Stine’s scenario about the low relative humidity air at the back of the servers, I was reminded again by Mr. Admirand that it won’t make much of a difference, since that air is being discharged back to the CRAC equipment. Furthermore, even if the air is recirculated back to the critical load’s inlet, the absolute moisture content of the air remains constant and the mixed air temperature is not low enough to cause a problem. John Lin contends this is the reason why we only control temperature and relative humidity.
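
To see the effect numerically, here is a self-contained sketch (using the same Magnus approximation as above; the inlet condition and temperature rise are illustrative assumptions) showing that heating air at constant moisture content lowers its relative humidity but leaves its dew point unchanged:

import math

def p_ws(temp_c):
    # Magnus approximation for saturation vapor pressure over water, in Pa
    return 610.94 * math.exp(17.625 * temp_c / (temp_c + 243.04))

def dew_point(temp_c, rh_percent):
    # Invert the Magnus formula to recover the dew point temperature
    gamma = math.log(rh_percent / 100.0) + 17.625 * temp_c / (temp_c + 243.04)
    return 243.04 * gamma / (17.625 - gamma)

inlet_c, inlet_rh = 22.0, 50.0            # illustrative server inlet condition
outlet_c = 40.0                           # air heated by the IT load
p_w = (inlet_rh / 100.0) * p_ws(inlet_c)  # vapor pressure is unchanged by heating
outlet_rh = 100.0 * p_w / p_ws(outlet_c)

print(f"outlet RH ~ {outlet_rh:.0f}%")                      # drops to roughly 18%
print(f"dew point ~ {dew_point(inlet_c, inlet_rh):.1f} C")  # unchanged, about 11 C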

It’s been our stance at PTS that the most important goal of humidity control is to prevent condensation. The only real danger with very warm, high-moisture-content air is that it will condense easily should its temperature drop below its dew point.

Separate data center humidity from cooling units?

I have no doubt that R. Stephen Spinazzola’s conclusion that it is cheaper to operate humidity control as a stand-alone air handler is on target. However, experience dictates that the approach is an uphill sell, since the savings are indirect and realized only as part of operational savings. The reality is that the upfront capital cost to deploy these systems is greater, especially in a smaller environment where it is harder to control humidity anyway.

Humidity control is very dependent on the environment for which you are designing a system. In a large data center, it is actually easier to do because most of the building is presumably a controlled data center environment. However, for SMEs with computer rooms in tenant spaces, humidity control is much more difficult, since it is dictated by the building’s overall humidity environment. At best, a computer room is a giant sponge – the question is whether it is gaining water from or giving off water to the rest of the building.

The design and construction of a data center or computer room, including its cooling system, should meet the specific environmental needs of its equipment. For now, our approach at PTS Data Center Solutions has been to utilize humidity control within DX units for both small and large spaces. Conversely, we control humidity separately when deploying in-row, chilled water cooling techniques for higher density cooling applications in smaller sites.

For more information on data center humidity issues, read “Changing Cooling Requirements Leave Many Data Centers at Risk” or visit our Computer Room Cooling Systems page.

Friday, June 08, 2007

Recommended Reading: “The New Data Center”

There’s an interesting piece up at NetworkWorld.com on the leading trends in data center storage, titled “The New Style of Storage.” It touches on a variety of topics including e-discovery, eco-friendly storage technology, and virtualization.

This is part 3 of a six-part series that examines the latest technologies and practices for building “the New Data Center.” Taken together, it’s a bit of a long read, but well worth the time. Check it out when you have a chance.

Be sure to take a look at parts 1 and 2, as well:
Part 1, The New Data Center – Trends, Products & Practices for Next-Gen IT Infrastructure
Part 2, Defending Your Net – Tools and Tactics for Enterprise IT Security

Want to weigh in on what you’ve read? I’d love to hear it. Post your thoughts on the comments page for this entry.

Tuesday, May 15, 2007

High Density Devices Strain Data Center Resources

A few weeks back I commented on the current boom in data center development. Spurring this trend is the growing need for greater processing power and increased data storage capacity, as well as new federal regulations that call for better handling and storage of data.

In the scramble to keep up with these demands, the deployment of high density devices and blade servers has become an attractive option for many data center managers. However, a new report from the Aperture Research Institute indicates that “many facilities are not able to handle the associated demand for power and cooling.”

The study, based on interviews with more than 100 data center professionals representing a broad spectrum of industries, reveals that the deployment of high density equipment is creating unforeseen challenges within many data centers.

Highlights of the report include:

  • While the majority of data center managers are currently running blade servers in their facilities, traditional servers still comprise the bulk of new server purchases. Mixing small quantities of blade servers in with non-blade servers can unnecessarily complicate the data center environment and make maintenance more difficult.
  • The rising power density of racks makes them more expensive to operate and more difficult to cool. More than one-third of the respondents said their average power density per rack was over 7 kW, a scenario that sets these facilities up for potential data center cooling issues and unexpected downtime (see the airflow arithmetic after this list).
  • Respondents report that the majority of data center outages were caused by human error and improper failover.
  • What’s really jaw-dropping is that while more than 22% of outages were due to overheating, 21% of respondents admit that they don’t know the maximum power density of their racks. The report points out that “[o]ver 8% of respondents are therefore using high-density devices without tracking power density in a rack, dramatically increasing the potential for outages.”
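
To put a 7 kW rack in perspective, here is a rough back-of-the-envelope Python sketch using the common rule of thumb CFM ≈ 3.16 × watts ÷ ΔT(°F); the 20 °F temperature rise across the rack is an assumption for illustration, not a figure from the Aperture report:

# Rough cooling arithmetic for a single high-density rack.
rack_load_w = 7000      # 7 kW rack
delta_t_f = 20          # assumed air temperature rise across the rack, deg F

cfm = 3.16 * rack_load_w / delta_t_f    # rule-of-thumb airflow requirement
btu_per_hr = rack_load_w * 3.412        # heat to be removed

print(f"~{cfm:.0f} CFM of cooling airflow per rack")   # roughly 1,100 CFM
print(f"~{btu_per_hr:.0f} BTU/hr of heat to remove")   # roughly 23,900 BTU/hr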

High density equipment can help data centers keep up with business demands, but only if you can keep things running smoothly. Proper management of power and cooling is essential for meeting the end user's availability expectations. For more information on the various cooling challenges posed by high density rack systems, please visit our Data Center Cooling Challenges page at PTSDCS.com.