Friday, August 24, 2007

Plan Your Data Center Move (Part 2 of 2)

A successful data center relocation starts with a good plan. By placing emphasis on pre-design and planning, you can achieve an optimal solution to meet the demands of your data center move. Here are some key points to address when developing your own data center relocation strategy:

What equipment really needs to move?

An equipment migration is the perfect time to make network and network security improvements, phase out old server and storage platforms, and undertake a virtualization project to minimize the number of servers.

Is the new site’s support infrastructure prepared to accept the new load?

Is there enough UPS capacity, cooling, power distribution, floor loading capacity, and so on? Is the data cabling strategy staying the same, or will you be making changes? It’s helpful to retain a computer room design consultant to verify the load capacity and redundancy constraints of the new site. If you are working with a pre-existing space, the new computer room should be re-commissioned.
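
To make that capacity question concrete, here is a minimal sketch, in Python, of the kind of back-of-the-envelope check I have in mind. Every number and field name below is hypothetical, and a real verification should come from the design consultant’s load study rather than a script.

```python
# Hypothetical sanity check: does the new site's infrastructure cover the
# projected load with headroom? All names and numbers below are examples,
# not actual site data.

projected = {
    "it_load_kw": 120.0,        # total critical IT load to be moved
    "cooling_load_kw": 150.0,   # IT load plus fan/lighting/envelope gains
    "floor_load_kg_m2": 850.0,  # heaviest cabinet footprint loading
}

site_capacity = {
    "it_load_kw": 160.0,        # usable UPS output at the new site
    "cooling_load_kw": 175.0,   # net sensible cooling capacity
    "floor_load_kg_m2": 1200.0, # rated raised-floor loading
}

MIN_HEADROOM = 0.20  # require at least 20% spare capacity for growth

for metric, load in projected.items():
    capacity = site_capacity[metric]
    headroom = (capacity - load) / capacity
    status = "OK" if headroom >= MIN_HEADROOM else "REVIEW"
    print(f"{metric:20s} load={load:8.1f} capacity={capacity:8.1f} "
          f"headroom={headroom:5.1%} -> {status}")
```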

Establish corporate buy-in.

Clearly communicate the timeline of the project with everyone in the company – management and employees alike.

Identify, mark, tag, and document everything – twice!

Every piece of equipment from subfloor to ceiling – be it a cabinet, rack, power cable, power strip, patch cable, data cable, bracket, nut, or bolt – needs to be accounted for using a numbering convention that will ensure everything goes back together exactly as it came apart.
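
One way to keep such a numbering convention honest is to generate the tags and the inventory list from the same place. The sketch below illustrates the idea; the tag format (site, cabinet, rack unit, sequence) and the CSV fields are assumptions made for illustration, not an industry standard.

```python
import csv

# Hypothetical tag format: SITE-CABINET-RU-SEQ, e.g. "NJ1-CAB04-U12-003".
# The fields and naming are illustrative only; the point is that the same
# convention covers every cabinet, device, cable, and bracket.

def make_tag(site: str, cabinet: int, rack_unit: int, seq: int) -> str:
    return f"{site}-CAB{cabinet:02d}-U{rack_unit:02d}-{seq:03d}"

inventory = [
    # (tag, description, connects_to) -- connects_to ties both ends of a cable
    (make_tag("NJ1", 4, 12, 1), "1U web server", ""),
    (make_tag("NJ1", 4, 12, 2), "power cable, server to strip A", make_tag("NJ1", 4, 40, 1)),
    (make_tag("NJ1", 4, 40, 1), "power strip A", ""),
]

with open("move_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["tag", "description", "connects_to"])
    writer.writerows(inventory)
```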

Develop a schedule with enough time built in for contingencies.

Allow yourself a sufficient margin of error in case there’s a hold-up at some point during the process. Build extra time in at the end of the data center relocation schedule and don’t try to do too much at one time.


For more advice on data center migration, check out "Tips For Moving Your Data Center" at Processor.com.

Monday, August 20, 2007

Plan Your Data Center Move (Part 1 of 2)

In my post “Tips for Handling Your Data Center Relocation,” I discussed some basic strategies for streamlining a data center move. Since then, I’ve received a few requests for more insight into handling the data center relocation process. In this post I’ll address whether it’s necessary to call in the pros and how to pick a data center moving company.

While in some cases the in-house team can handle the move themselves, most enterprises need a little extra help. I liken it to attempting a plumbing project on your own: the tools you need to do the job effectively are so specialized that you rarely have them on hand, and in most situations it would take you three times the effort it would take a professional. With a data center relocation project, having the right packing materials, rigging equipment, trucks, and so forth is essential to a job well done.

Here’s an overview of how to find and hire a company to help with the data center relocation process:

Step 1: Finding a data center moving company.

Nearly every area has a company that specializes in relocating computer equipment. They can be found in the Yellow Pages, via an online search, or by asking for referrals from colleagues. The hard part is making sure you’ve found a qualified company that specializes in data center moves. Checking references is vitally important. A general rule of thumb I’ve seen people use is “The bigger the companies they work for, the better the moving company is,” but this isn’t always the case.

Step 2: Checking qualifications.

When lost or damaged equipment can mean downtime and escalating costs, the need to choose carefully is clear. The most important thing to look for is experience. How many years has the data center relocation company been in business? What’s the combined experience of their team? Have they worked on projects of similar scale to your own?

Ask specific questions to make sure they perform these services on a regular basis. What are the company’s best practices and proven methodologies? What resources and support does the company offer? How would they coordinate all aspects of the move from start to finish?

Remember that the moving company is only one part of the integrated team for an effective relocation. Be sure to involve key stakeholders in the process, including your IT, business and facilities staff as well as third-party vendors. The project team should include:

  • your internal IT and facilities staff;
  • an overall project manager (internal or external);
  • an IT services company to assist in the marking, tagging, un-cabling, un-racking, re-racking, and re-cabling of all IT infrastructure; and
  • a computer room design firm to verify the power and cooling capacity at the new site.

For a more detailed guide to hiring a firm, download my white paper, “Tips for Hiring a Data Center Consultant.”

(Next post: Establishing an overall plan for your data center move…)

Thursday, July 26, 2007

Reflections on the Data Center in a Box

Recently Jack Lyne, Executive Editor at Site Selection magazine, contacted me regarding Sun Microsystems' new Project Blackbox, colloquially dubbed the “data center in a box.” (Check out his article: “Sun’s Blackbox: A Moveable Feast for Data Centers?”) Jack’s questions led me to reflect on the current rate of adoption I’ve observed for the mobile data center.

While the energy-efficient technology offers the benefit of rapid deployment, for many companies the Blackbox does not provide a feasible alternative to the traditional brick-and-mortar data center. Similar solutions have been equally ineffective: APC’s “data center on wheels” never seemed to produce the desired impact, even though it was a vendor-neutral processing environment.

The limitation for most companies that would be in the market for this technology is not space so much as access to adequate power and cooling. Despite its all-in-one packaging, the Blackbox does not eliminate the need for power and chilled water, which are two primary cost drivers of any computer room project. At best, the Blackbox is a Tier I data center as defined by the Uptime Institute’s standard, which can be built just about anywhere for the same money or less.

The Data Center Journal summed up the sentiment quite nicely:

“A mobile data center is nothing new. We have seen APC deliver a mobile data center on wheels. We have seen manufacturers such as iFortress or Rittal’s Lampertz product line which both provide heavy duty and easily constructed mobile data center facilities. ...

“The Sun “Data Center in a Box” provides the industry with another choice that can meet the need of the consumer, but is it needed and will the industry embrace it or will it become a small niche market product? Time will tell.”

Monday, July 16, 2007

PTS Weighs in on Data Center Humidity Issues

Mark Fontecchio’s recent article on data center humidity issues at SearchDataCenter.com not only created buzz in the data center blogs, but generated quite a discussion amongst our team at PTS Data Center Solutions.

Data center humidity range too strict?

While some data center professionals find the 40% to 55% relative humidity range recommended by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) to be restrictive, I think the tight ASHRAE standards have to be adhered to until further research proves otherwise.

PTS’s engineering manager, Dave Admirand, PE, notes that the reason the telecommunications industry is able to operate within a wider humidity range of 35% to 65% is its very strict grounding regimen. In a well-grounded system, an electrical charge has no place to build up and is more readily dissipated to ground. Mr. Admirand recalls his days at IBM, when the ‘old timers’ would swear by wearing leather-soled shoes (conductive enough to maintain a connection to the grounded raised floor) and/or washing their hands (presumably to carry dampness with them) prior to entering their data centers, to avoid a shock from discharging the static build-up on their bodies onto a surface.

Relative humidity vs. absolute humidity

While I think both relative and absolute humidity should be considered, many in the industry are still designing to and measuring relative humidity. PTS mechanical engineer John Lin, PhD, points out that only two properties of the air are independent, and data center professionals have to control the air temperature. So while we can directly control only one of the humidity values, it is possible to calculate the absolute humidity (humidity ratio) from the air temperature and relative humidity. Therefore, data centers are fine as long as both temperature and relative humidity are within the permissible range.

Coy Stine’s example is right on the mark. The high temperature delta between inlet and outlet air that can be realized in some dense IT equipment may lead to very low-humidity air inside critical electronics, which can lead to electrostatic discharge (ESD). My experience, however, is that I am not encountering data loss scenarios attributable simply to ESD at the estimated 50-100 data centers I visit each year. This leads me to believe that there is a slight tendency to ‘make mountains out of molehills’ regarding ESD.

After further reflection on Stine’s scenario about the low relative humidity air at the back of the servers, I was reminded again by Mr. Admirand that it won’t make much of a difference, since that air is being discharged back to the CRAC equipment. Furthermore, even if the air is recirculated back to the critical load’s inlet, the absolute moisture content of the air remains constant and the mixed air temperature is not low enough to cause a problem. Dr. Lin contends this is the reason why we only control temperature and relative humidity.

It’s been our stance at PTS that the most important goal of humidity control is to prevent condensation. The only real danger with very warm, high-moisture-content air is that it will condense readily should its temperature drop below the dew point.
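
To put some example numbers to the psychrometrics discussed above, here is a minimal sketch using the standard Tetens approximation for saturation vapor pressure. The 22°C / 45% RH supply air and the 20°C rise through the server are assumed values for illustration only.

```python
import math

P_ATM_KPA = 101.325  # sea-level atmospheric pressure

def sat_vapor_pressure_kpa(t_c: float) -> float:
    """Saturation vapor pressure (kPa) via the Tetens approximation."""
    return 0.61078 * math.exp(17.27 * t_c / (t_c + 237.3))

def humidity_ratio(t_c: float, rh_pct: float) -> float:
    """Absolute humidity (kg water per kg dry air) from temperature and RH."""
    e = (rh_pct / 100.0) * sat_vapor_pressure_kpa(t_c)
    return 0.622 * e / (P_ATM_KPA - e)

def rh_from_ratio(t_c: float, w: float) -> float:
    """Relative humidity (%) of air at t_c carrying humidity ratio w."""
    e = w * P_ATM_KPA / (0.622 + w)
    return 100.0 * e / sat_vapor_pressure_kpa(t_c)

def dew_point_c(t_c: float, rh_pct: float) -> float:
    """Dew point (degC): surface temperature below which condensation forms."""
    e = (rh_pct / 100.0) * sat_vapor_pressure_kpa(t_c)
    ln_term = math.log(e / 0.61078)
    return 237.3 * ln_term / (17.27 - ln_term)

# Example: 22 degC / 45% RH supply air heated 20 degC through a dense server.
w = humidity_ratio(22.0, 45.0)
print(f"Humidity ratio: {w * 1000:.1f} g/kg (unchanged through the server)")
print(f"RH at the 42 degC outlet: {rh_from_ratio(42.0, w):.0f}%")
print(f"Dew point of the supply air: {dew_point_c(22.0, 45.0):.1f} degC")
```

Run as-is, the sketch shows the humidity ratio holding at roughly 7 g/kg while relative humidity falls from 45% at the inlet to about 15% at a 42°C outlet, with a dew point near 10°C, which is why the real concern is condensation on cold surfaces rather than the dry air at the back of the rack.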

Separate data center humidity from cooling units?

I have no doubt that R. Stephen Spinazzola’s conclusion that it is cheaper to operate humidity control as a stand-alone air handler is on target. However, experience dictates the approach is an uphill sell since the savings are indirect, realized only as part of operational savings. The reality is that the upfront capital cost is greater to deploy these systems, especially in a smaller environment where it is harder to control humidity anyway.

Humidity control is very dependent on the environment for which you are designing a system. In a large data center it is actually easier, because most, if not all, of the building is a controlled data center environment. For SMEs with computer rooms in tenant space, however, humidity control is much more difficult, since it is dictated by the humidity environment of the overall building. At best, a computer room is a giant sponge – the question is whether you are gaining water from, or giving it off to, the rest of the building.

The design and construction of a data center or computer room, including its cooling system, should meet the specific environmental needs of its equipment. For now, our approach at PTS Data Center Solutions has been to utilize humidity control within DX units for both small and large spaces. Conversely, we control humidity separately when deploying in-row, chilled water cooling techniques for higher density cooling applications in smaller sites.

For more information on data center humidity issues, read “Changing Cooling Requirements Leave Many Data Centers at Risk” or visit our Computer Room Cooling Systems page.

Friday, June 08, 2007

Recommended Reading: “The New Data Center”

There’s an interesting piece up at NetworkWorld.com on the leading trends in data center storage, titled “The New Style of Storage.” It touches on a variety of topics including e-discovery, eco-friendly storage technology, and virtualization.

This is part 3 in a six-part series that examines the latest technologies and practices for building “the New Data Center.” Taken together, it’s a bit of a long-read, but well worth the time. Check it out when you have a chance.

Be sure to take a look at parts 1 and 2, as well:
Part 1, The New Data Center – Trends, Products & Practices for Next-Gen IT Infrastructure
Part 2, Defending Your Net – Tools and Tactics for Enterprise IT Security

Want to weigh in on what you’ve read? I’d love to hear it. Post your thoughts on the comments page for this entry.

Tuesday, May 15, 2007

High Density Devices Strain Data Center Resources

A few weeks back I commented on the current boom in data center development. Spurring this trend is the growing need for greater processing power and increased data storage capacity, as well as new Federal regulations which call for better handling and storage of data.

In the scramble to keep up with these demands, the deployment of high density devices and blade servers has become an attractive option for many data center managers. However, a new report from the Aperture Research Institute indicates that “many facilities are not able to handle the associated demand for power and cooling.”

The study, based on interviews with more than 100 data center professionals representing a broad spectrum of industries, reveals that the deployment of high density equipment is creating unforeseen challenges within many data centers.

Highlights of the report include:

  • While the majority of data center managers are currently running blade servers in their facilities, traditional servers still comprise the bulk of new server purchases. Mixing small quantities of blade servers in with non-blade servers can unnecessarily complicate the data center environment and make maintenance more difficult.
  • The rising power density of racks makes them more expensive to operate and more difficult to cool. More than one-third of the respondents said their average power density per rack was over 7 kW, a scenario that sets these facilities up for potential data center cooling issues and unexpected downtime.
  • Respondents report that the majority of data center outages were caused by human error and improper failover.
  • What’s really jaw-dropping is that while more than 22% of outages were due to overheating, 21% of respondents admit that they don’t know the maximum power density of their racks. The report points out that “[o]ver 8% of respondents are therefore using high-density devices without tracking power density in a rack, dramatically increasing the potential for outages.”

High density equipment can help data centers keep up with business demands, but only if you can keep things running smoothly. Proper management of power and cooling is essential for meeting the end user's availability expectations. For more information on the various cooling challenges posed by high density rack systems, please visit our Data Center Cooling Challenges page at PTSDCS.com.
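
Closing that tracking gap doesn’t require sophisticated tooling. The sketch below shows one simple way to sum per-rack draw and flag racks approaching a chosen density threshold; the rack names and wattages are hypothetical, and the 7 kW flag simply echoes the figure cited in the report.

```python
# Hypothetical per-rack power tracking. Device wattages and rack names are
# examples; in practice, measured draw from metered PDUs beats nameplate values.

racks = {
    "RACK-A01": [350, 350, 420, 480, 510],   # mixed 1U/2U servers
    "RACK-A02": [4500, 4200],                # two loaded blade chassis
    "RACK-B07": [250, 250, 300, 6200],       # blades mixed into a legacy rack
}

DENSITY_FLAG_W = 7000  # flag racks above 7 kW, per the Aperture figure

for rack, device_watts in sorted(racks.items()):
    total_w = sum(device_watts)
    flag = "  <-- review cooling for this rack" if total_w > DENSITY_FLAG_W else ""
    print(f"{rack}: {total_w / 1000:.1f} kW across {len(device_watts)} devices{flag}")
```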

Tuesday, April 24, 2007

NEW from PTS: Computational Fluid Dynamic (CFD) Services

Behind the scenes at PTS Data Center Solutions, we’re always working to enhance our products, services and solutions in order to provide our clients with designs that offer optimum manageability and performance.

Our newest consulting service utilizes powerful 3-D Computational Fluid Dynamic (CFD) software to facilitate the design, operational analysis and maintenance of our clients’ data centers and computer rooms.

Here is an overview of the multiple applications of our CFD Services:

  • CFD Modeling as a Design Tool

By building CFD models of a mission critical space, engineers can quickly and efficiently review multiple design options. This allows for early detection of potential problems with air flow and heat distribution, thus permitting designers to provide an optimum solution.

  • CFD Operational Baseline Service

After the data center’s IT infrastructure has been populated, PTS uses CFD modeling to map the site and analyze the data center cooling characteristics down to the equipment level. By doing so, we can determine how variations in the position and design of equipment, as well as other factors, affect the room’s cooling profile.

  • Maintaining a CFD Modeled Computer Room

To ensure the high performance and manageability of a mission critical site, it is important to understand the effect that equipment changes will have before implementation takes place. Through CFD visualization, simulation and analysis, PTS’s consulting team can predict the impact of operational changes on the temperatures in the room. From there PTS is able to make recommendations for avoiding potential problems while planning for future growth. As part of the CFD modeling process, PTS maintains a complete asset inventory log as well as a detailed change order log, ensuring that infrastructure changes are tracked correctly.
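
As a rough illustration of the kind of record keeping involved, the sketch below shows one possible shape for a change-order entry tied back to the CFD model. The fields are my own assumptions for illustration, not PTS’s actual log format.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class ChangeOrder:
    """One logged infrastructure change, tied to the CFD model of the room."""
    order_id: str
    change_date: date
    asset_tag: str          # which cabinet or device is affected
    action: str             # e.g. "install", "remove", "relocate"
    location: str           # cabinet and rack-unit position after the change
    added_load_watts: int   # net heat load added (negative if removed)
    cfd_model_updated: bool # has the CFD model been re-run for this change?

change_log: List[ChangeOrder] = [
    ChangeOrder("CO-0042", date(2007, 4, 20), "CAB07-U20", "install",
                "cabinet 7, U20-U21", 650, cfd_model_updated=True),
]

pending = [c for c in change_log if not c.cfd_model_updated]
print(f"{len(pending)} change(s) still need a CFD model update")
```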

If you’re interested in learning more about this data center consulting service, please visit our Computational Fluid Dynamic (CFD) Services page.

Request a Quote

To request a quote for PTS's CFD Baseline and/or Maintenance Services, please send an email to CFD@PTSdcs.com with the following information:

  1. The physical address of the location
  2. The square footage of the computer room to be modeled
  3. The number of server cabinets, racks, and stand-alone pieces of equipment in the computer room
  4. The number of IT infrastructure devices (servers, switches, routers, storage arrays, etc.) the computer room supports

Wednesday, April 11, 2007

Keeping It Clean in the Data Center

Spring is here. It’s the time of year when people throw open the windows, pull out the dust rags and fire up their vacuums for a burst of Spring Cleaning. This annual household ritual serves as a good reminder of the importance of regular cleanings within the data center environment.

Regularly scheduled site cleanings help to keep the data center environment free of dust, dirt and other particulates that can harm your systems and create health risks for employees. Particulates circulating within a data center can accumulate and interfere with electronics, causing a variety of potential problems, including media errors and data loss.

A good rule of thumb is to schedule data center cleanings on a quarterly basis, or when particulate counts exceed the standards set by ISO 14644-8 or ISO 14644-9. By sticking to this cleaning routine, companies optimize the performance of data center equipment while cutting down on the cost of repairs. When you compare the cost of regular cleaning sessions to the overall financial investment in your data center, it’s a smart buy.

Choosing a Data Center Cleaning Service

Don’t grab a broom and dustpan just yet. While it’s good to clean both houses and data centers on a regular basis, that’s where most of the similarities end. Cleaning a data center is a delicate process that requires the services of highly-trained professionals who know how to safely handle mission critical equipment.

To help you select the right cleaning service, here are some tips:

  • Check the company’s references. In addition to the quality of the service, you want to make sure the company has experience dealing with facilities that are similar to your own.
  • Make sure the company is insured for damages caused during the cleaning process. If an accident occurs, are you protected?
  • Evaluate the experience and training of the cleaning crew. For instance, are they trained to provide services per the requirements of International Standard ISO 14644?
  • Review the company’s cleaning methods to see if they use HEPA filtration vacuums and chemicals that are safe for use with electronics systems.
  • Be clear about your expectations for the service and establish parameters for cleaning. Will the technicians move equipment? Will they clean the sub-floor or above each rack? Are certain areas off-limits? What’s included in the service?
  • Look for a cleaning service that offers availability that meets your needs. In addition to yearly cleanings, will they be available for daily maintenance activities or in the event of an emergency?

Friday, April 06, 2007

Data Center Cooling: Approaches to Avoid

Data center cooling problems can compromise availability and increase costs. The ideal data center cooling system requires an adaptable, highly available, maintainable, manageable, and cost-effective design.

When working to design an effective data center cooling system, there are a number of commonly deployed data center cooling techniques that should not be implemented. They are:

  • Reducing the CRAC supply air temperature to compensate for hot spots
  • Using cabinet and/or enclosures with either roof-mounted fans and/or under-cabinet floor cut-outs, without internal baffles
  • Isolating high-density RLUs

Reducing CRAC Temperatures

Simply making the air colder will not solve a data center cooling problem. The root of the problem is either a lack of cold air volume at the equipment inlet or a lack of sufficient hot return air removal from the outlet of the equipment. All things being equal, any piece of equipment with internal fans will cool itself. Typically, equipment manufacturers do not even specify an inlet temperature; they usually specify only the percentage of clear space that must be maintained at the front and rear of the equipment to ensure adequate convection.

Roof-mounted cabinet fans

CFD analysis conclusively proves that roof-mounted fans and under-cabinet air cut-outs will not sufficiently cool a cabinet unless air baffles are used to isolate the cold air and hot air sections. Without baffles, a roof-mounted fan will draw not only the desired hot air at the rear, but also a volume of cold air from the front before it can be drawn in by the IT load. This serves only to cool the volume of hot air, which we have already established is a bad strategy. Similarly, providing a cut-out in the access floor directly beneath the cabinet will deliver cold air to the inlet of the IT loads; however, it will also leak cold air into the hot aisle. Again, this only serves to cool the hot air.

Isolating high-density equipment

While isolating high-density equipment isn’t always a bad idea, special considerations must be made. Isolating the hot air is, in fact, a good idea. The problem, however, is in delivering a sufficient volume of cold air from the raised floor. Even assuming enough perforated floor tiles are dedicated to provide a sufficient air volume, too much of the hot air re-circulates from the back of the equipment to the front air inlet and mixes with the cold air.

For more information on data center cooling, please download my newest White Paper, Data Center Cooling Best Practices, at http://www.ptsdcs.com/white_papers.asp. You can also find additional publications on our Vendor White Papers page.

Thursday, March 22, 2007

The New Data Center Boom

Across the country, data center development is booming. Companies, including major players like Microsoft and Google, are buying up acres of land with the intent of building new data centers.

This rapid growth is, at least in part, spurred by the requirements of the Sarbanes-Oxley Act and the Health Insurance Portability & Accountability Act (HIPAA), which call for better handling and storage of data. Companies are also responding to the nationwide push to establish energy efficient data centers. In order to accommodate the state-of-the-art, next generation data centers, companies simply need more space than their current facilities can provide.


Data Center Site Selection

For companies seeking to develop a new data center facility, high-quality site selection is of the utmost importance. By choosing a site location wisely, companies can save both time and money, while achieving scalability, flexibility and high availability.

Choosing a site that minimizes the natural and man-made threats to continuous operation is the first step in provisioning a new data center. There are many factors to consider, including:

  • Natural Hazard Threats
  • Physical Location Threats
  • Terrorist Activity Threats
  • Environmental Contamination Threats
  • Site Accessibility
  • Amenities Access

It is interesting to note that the priority level of these factors is highly changeable. For instance, a decade ago it would have been more common for companies to seek site locations in close proximity to major cities and airports. In the wake of September 11th, however, data centers are more likely to spring up in smaller cities, reducing the likelihood of damage from terrorist attacks, and most especially in those areas of the country that have the lowest operating costs, including utility rates, land acquisition costs, labor rates, tax rates, and cost-of-living expenses.

To help navigate the complex process of site selection, many companies employ data center consultants for assistance in choosing an appropriate geographic location for their data center. Site selection services are the optimal way to ensure your mission critical facility is set up in both a location and a building that can support constant availability.