Wednesday, December 26, 2007

Data Center Wish Lists

In the spirit of the holiday season, the folks at SearchDataCenter.com have taken a look at what’s on data center managers’ holiday wish lists. It’s a fun read – check it out when you have a minute.

Here are some of the highlights:

“Extra processor horsepower”

“Information on building new data centers to carry us through the next 20 years and beyond”

“A pill that we could give folks in the physical plant and IT that would give them an understanding of what the data center is and what it takes to operate one under best practices”

And all I thought to ask for was a Nintendo Wii. I’ll have to be more creative next year.

No matter what you’re wishing for this year, the team at PTS Data Center Solutions wishes you a happy holiday season and a fantastic New Year!

Monday, December 17, 2007

Embracing the Expanding Role of IT in Business

I was recently asked by Processor Magazine to answer a few questions about IT’s role in business, and it occurred to me that now might be a perfect time to give a “shout out” to the IT folks out there. A sort of gift, if you will, in the spirit of the season.

First, let me dispel an all-too-common myth – IT is not just a group of “geeks” typing code all day in the server closet down the hall. Far from it. As technology continues marching forward, IT’s role and its importance to the bottom line continue to grow. And don’t just take my word for it – according to the MIT Sloan Management Review, Information and Information Technology have become the fifth major resource available to executives for shaping an organization, alongside people, money, material and machines.[1] In fact, we’re witnessing businesses large and small expand what was traditionally thought of as IT into a broader corporate responsibility known as Information Systems (IS). This new IS paradigm is responsible for the development and implementation of business processes (BP) throughout an organization. These BPs are often technology-based and are therefore the logical domain of the organization’s technology leaders.

IT, or “IS” I should say, is responsible for much more than just fixing uncooperative computers. IS deals with the use of infrastructure – PCs, servers, storage, networks, security, communications, and related software – to manipulate, store, protect, process, transmit, and retrieve information securely. Today, the IT umbrella is quite large and covers many disciplines. IT professionals perform duties ranging from data management, networking, and network security to deploying infrastructure, managing communications, designing and implementing databases and software, and monitoring and administering entire systems.

So what’s my point, you ask? Simply to reinforce the value of IT and help shift the corporate perception of IT from a “necessary evil” to an important value center that can help businesses and employees accomplish more, with greater accuracy, in less time, while using fewer company resources. For 2008, I encourage companies to make a New Year’s resolution to embrace IT and look for ways to make the most of this extremely valuable resource.

[1] Rockart et al. (1996), "Eight Imperatives for the New IT Organization," Sloan Management Review.

Monday, October 29, 2007

Server Cabinet Organization Tips

Just in time for Halloween, check out this classic server room cabling nightmare at Tech Republic. Scary stuff.

Good data center design is a combination of high-level conceptual thinking and strategic planning, plus close attention to detail. Obviously, things like the cooling system and support infrastructure are critical to maintaining an always-available data center, but smaller things like well-organized server cabinets also contribute to the overall efficiency of a data center or computer room. With that in mind, I thought I’d share a few of our guidelines and best practices for organizing your cabinets.

In no particular order:

1. Place heavier equipment on the bottom, lighter equipment towards the top

2. Use blanking plates to fill equipment gaps to prevent hot air from re-circulating back to the front

3. Use a cabinet deep enough to accommodate cable organization and airflow in the rear of the cabinet

4. Use perforated front and rear doors when using the room for air distribution

5. Make sure doors can be locked for security

6. PTS prefers using a patch panel in each cabinet for data distribution. We typically install it in the top rear U positions, but are experimenting with vertical rear-channel patch cable distribution

7. PTS prefers using vertical power strips in a rear channel of the cabinet with short power cords for server-to-power-strip distribution

8. While they are convenient, do not use cable management arms that fold the cables on the back of the server as they impede outlet airflow of the server

9. Don’t use roof fans without front-to-rear baffling. They suck as much cold air from the front as they do hot air from the rear.

10. Monitor air inlet temperature ¾ of the way up the front of the cabinet

11. Use U-numbered vertical rails to make mounting equipment easier

12. Have a cabinet numbering convention and floor layout map

13. Use color-coded cabling for different services

14. Separate power and network cabling distribution on opposite sides of the cabinet

15. PTS often uses the tops of the cabinet to facilitate cabinet-to-cabinet power and data cable distribution

As you can see, the little things do make a difference. And by instituting some or all of these practices, you’ll be one step closer to 24/7 availability.
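A quick postscript on tip 10: inlet-temperature monitoring is easy to automate. Below is a minimal Python sketch, not a PTS standard, that flags cabinets whose front-inlet reading drifts outside an illustrative band; the thresholds, cabinet IDs, and readings are all made-up assumptions, and the readings dict stands in for whatever sensor or BMS integration you actually use.

```python
# Minimal sketch: flag cabinets whose front-inlet temperature (measured about
# 3/4 of the way up the cabinet, per tip 10) falls outside an illustrative
# band. The readings dict stands in for your own sensor/BMS integration, and
# the thresholds are examples, not a standard.

INLET_MIN_F = 64.0   # example low limit (deg F)
INLET_MAX_F = 80.0   # example high limit (deg F)

def check_inlet_temps(readings):
    """readings: {cabinet_id: inlet_temp_F}. Returns out-of-band cabinets."""
    return {cab: t for cab, t in readings.items()
            if t < INLET_MIN_F or t > INLET_MAX_F}

if __name__ == "__main__":
    sample = {"A01": 72.5, "A02": 83.1, "B01": 68.0}   # example data only
    for cab, t in check_inlet_temps(sample).items():
        print(f"Cabinet {cab}: inlet {t:.1f} F is outside "
              f"{INLET_MIN_F}-{INLET_MAX_F} F")
```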

Wednesday, October 24, 2007

The Role of Sprinklers in Computer Room Fire Protection

A number of clients have asked us about the viability of replacing their ‘wet’ sprinkler systems with a dry-type fire suppression system, such as FM-200. Not many IT personnel understand the role of water-based fire suppression systems, but all realize the potential for water in the data processing environment to be a “bad thing.”
 
The short answer is that sprinkler systems protect the building and dry-type systems protect the equipment. In most cases a dry-type system cannot take the place of a sprinkler system; it can only be installed in addition to it. At the end of the day, the local fire inspector is the authority having jurisdiction over what is permissible. This is the reason that pre-action sprinkler systems are primarily used for computer room fire protection.
 
That being said, fire prevention provides more protection against damage than any type of detection or suppression equipment available. For Tier I and Tier II computer rooms, PTS often recommends installing only a pre-action sprinkler system activated by a photoelectric smoke detection system and forgoing both a dry-type suppression system and a VESDA system. We find the most effective strategy is to emphasize prevention and early detection. This allows the client to maximize availability by investing in solutions for areas of higher risk, such as fully redundant power and cooling systems.
 
For more information on fire protection, read our vendor white paper “Mitigating Fire Risks in Mission Critical Facilities,” which explains the creation, detection, suppression, and prevention of fire within mission critical facilities, discusses fire codes for Information Technology environments, and provides best practices for increasing availability.

Friday, September 28, 2007

Blogging in Good Company…

Happy Friday, everyone!

A little over a year ago, Rich Miller at Data Center Knowledge put together a list of data center-related blogs, some of which have now become part of my regular reading habit. (Thanks, Rich, for including the PTS blog in that list.)

Expanding on Data Center Knowledge’s list, here are a few blogs that I try to keep tabs on:
* Cisco’s Data Center Networks
* Virtual Graffiti’s APCGuard
* John Rath’s Data Center Information
* SearchDataCenter.com’s Server Specs
* Matt Stansberry’s SearchDataCenter.com Editorial Blog
* CEO Jonathan Schwartz’s Sun Microsystems Blog
* DD’s Eco Notes (another Sun blog)
* The Mainframe Blog
* The Next Generation Data Center Blog
* Various ITToolbox Blogs

Check them out when you have time.

By the way, the PTS Data Center Design blog has joined the MyBlogLog community. It’s a great tool for connecting with readers and authors of sites you enjoy. If you’re a MyBlogLog member, leave a message for me – it’s always great to hear from readers!

Tuesday, September 25, 2007

It Isn’t Easy Being Green: Companies forgo eco-friendly solutions

A number of major corporations in the past year, including News Corp and Citigroup, announced plans to launch significant environmental initiatives. These corporations are paying particular attention to sustainability and are taking steps to build green data centers, in addition to reducing their carbon footprint in other ways.

To meet this demand, industry leaders such as Sun Microsystems, HP and IBM added energy-efficient servers and other eco-friendly technology solutions to their offerings. However, according to Going Green: Vendors Deliver Solutions to Save Money – the World:

“[E]nd users won’t rush to replace their infrastructure with greener technology, says Blair Pleasant, president and principal analyst of research firm Commfusion LLC. For one thing, there are budgets to consider. Pleasant likens the principle to the car industry — many consumers might want to drive expensive hybrids but aren’t ready to replace their perfectly serviceable, gas-powered vehicles.

Plus, there’s some skepticism that environmentally friendly systems might not work as well as familiar, existing networks. Companies “are going to have to prove that the new technologies or systems are every bit as good as what [end users] already have,” Pleasant says.”


Despite the eco-friendly peer pressure, many companies have forgone – and will continue to forgo – potential long-term savings rather than accept increased capital expenditures, at least until the premium for going green diminishes. It will be interesting to see how this plays out as the green movement continues to build steam and as the media continues to barrage us with global warming news. If hybrid vehicles really start to take off, will green data centers too?

Friday, August 24, 2007

Plan Your Data Center Move (Part 2 of 2)

A successful data center relocation starts with a good plan. By placing emphasis on pre-design and planning, you can achieve an optimal solution to meet the demands of your data center move. Here are some key points to address when developing your own data center relocation strategy:

What equipment really needs to move?

An equipment migration is the perfect time to make network and network security improvements, phase out old server and storage platforms, and undertake a virtualization project to minimize the number of servers.

Is the new site’s support infrastructure prepared to accept the new load?

Is there enough UPS, cooling, power distribution, floor weight capacity, etc.? Is the data cabling strategy the same or will you be making changes? It’s helpful to retain a computer room design consultant to verify the load capacity and redundancy constraints of the new site. If working with a pre-existing space, the new computer room should be re-commissioned.
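As a back-of-the-envelope starting point (not a substitute for the load study or re-commissioning mentioned above), here is a minimal sketch that compares the load you plan to move against the new site’s rated capacities; the capacity figures, planned loads, and 20% headroom policy are all illustrative assumptions.

```python
# Rough capacity sanity check for a relocation target site. The capacities,
# planned loads, and 20% headroom margin below are illustrative assumptions,
# not a substitute for a formal load study or commissioning.

HEADROOM = 0.20  # keep at least 20% spare capacity (example policy)

new_site_capacity = {"ups_kw": 160.0, "cooling_kw": 180.0, "floor_kg_per_m2": 1200.0}
planned_load      = {"ups_kw": 120.0, "cooling_kw": 150.0, "floor_kg_per_m2": 950.0}

for metric, capacity in new_site_capacity.items():
    load = planned_load[metric]
    usable = capacity * (1.0 - HEADROOM)
    status = "OK" if load <= usable else "REVIEW"
    print(f"{metric}: load {load:g} vs usable {usable:g} (of {capacity:g}) -> {status}")
```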

Establish corporate buy-in.

Clearly communicate the timeline of the project with everyone in the company – management and employees alike.

Identify, mark, tag, and document everything – twice!

Every piece of equipment from subfloor to ceiling – be it a cabinet, rack, power cable, power strip, patch cable, data cable, bracket, nut, or bolt – needs to be accounted for using a numbering convention that will ensure everything goes back together exactly as it came apart.

Develop a schedule with enough time built in for contingencies.

Allow yourself a sufficient margin of error in case there’s a hold-up at some point during the process. Build extra time in at the end of the data center relocation schedule and don’t try to do too much at one time.


For more advice on data center migration, check out "Tips For Moving Your Data Center" at Processor.com.

Monday, August 20, 2007

Plan Your Data Center Move (Part 1 of 2)

In my post “Tips for Handling Your Data Center Relocation,” I discussed some basic strategies for streamlining a data center move. Since then, I’ve received a few requests for more insight into handling the data center relocation process. In this post I’ll address whether it’s necessary to call in the pros and how to pick a data center moving company.

While in some cases the in-house team can handle the move themselves, most enterprises need a little extra help. I liken it to attempting a plumbing project on your own. The tools you need to do the job effectively are specialized, and you rarely have them on hand – in most situations, it would take you three times the effort of a professional to get the same job done. With a data center relocation project, having the right packing materials, rigging equipment, trucks, and so forth is necessary for a job well done.

Here’s an overview of how to find and hire a company to help with the data center relocation process:

Step 1: Finding a data center moving company.

Nearly every area has a company that specializes in relocating computer equipment. They can be found in the Yellow Pages, via an online search, or by asking for referrals from colleagues. The hard part is making sure you’ve found a qualified company that specializes in data center moves. Checking references is vitally important. A general rule of thumb I’ve seen people use is “The bigger the companies they work for, the better the moving company is,” but this isn’t always the case.

Step 2: Checking qualifications.

When lost or damaged equipment can mean downtime and escalating costs, the need to choose carefully is clear. The most important thing to look for is experience. How many years has the data center relocation company been in business? What’s the combined experience of their team? Have they worked on projects of similar scale to your own?

Ask specific questions to make sure they perform these services on a regular basis. What are the company’s best practices and proven methodologies? What resources and support does the company offer? How would they coordinate all aspects of the move from start to finish?

Remember that the moving company is only one part of the integrated team for an effective relocation. Be sure to involve key stakeholders in the process, including your IT, business and facilities staff as well as third-party vendors. The project team should include:

  • your internal IT and facilities staff,
  • an overall project manager (internal or external),
  • an IT services company to assist in the marking, tagging, un-cabling, un-racking, re-racking, and re-cabling of all IT infrastructure, and
  • a computer room design firm to verify the power and cooling capacity on the other side.

For a more detailed guide to hiring a firm, download my white paper, “Tips for Hiring a Data Center Consultant.”

(Next post: Establishing an overall plan for your data center move…)

Thursday, July 26, 2007

Reflections on the Data Center in a Box

Recently Jack Lyne, Executive Editor at Site Selection magazine, contacted me regarding Sun Microsystems' new Project Blackbox, colloquially dubbed the “data center in a box.” (Check out his article: “Sun’s Blackbox: A Moveable Feast for Data Centers?”) Jack’s questions led me to reflect on the current rate of adoption I’ve observed for the mobile data center.

While the energy-efficient technology offers the benefit of rapid deployment, for many companies the Blackbox does not provide a feasible alternative to the traditional brick-and-mortar data center. Similar solutions have been equally ineffective. APC’s “data center on wheels” never seemed to produce the desired impact, and it was a vendor-neutral processing environment.

The limitation for most companies that would be in the market for this technology is not space so much as access to adequate power and cooling. Despite its all-in-one packaging, the Blackbox does not mitigate the need for power and/or chilled water, which are two primary cost drivers of any computer room project. At best, the Blackbox is a Tier I data center as defined by the Uptime Institute’s standard – something that can be built just about anywhere for equal or less money.

The Data Center Journal summed up the sentiment quite nicely:

“A mobile data center is nothing new. We have seen APC deliver a mobile data center on wheels. We have seen manufacturers such as iFortress or Rittal’s Lampertz product line which both provide heavy duty and easily constructed mobile data center facilities. ...

“The Sun “Data Center in a Box” provides the industry with another choice that can meet the need of the consumer, but is it needed and will the industry embrace it or will it become a small niche market product? Time will tell.”

Monday, July 16, 2007

PTS Weighs in on Data Center Humidity Issues

Mark Fontecchio’s recent article on data center humidity issues at SearchDataCenter.com not only created buzz in the data center blogs, but generated quite a discussion amongst our team at PTS Data Center Solutions.

Data center humidity range too strict?

While some data center professionals find the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE)’s recommended relative humidity of 40% to 55% to be restrictive, I think the tight ASHRAE standards have to be adhered to until further research proves otherwise.

PTS’s engineering manager, Dave Admirand, PE, notes that the reason the telecommunications industry is able to operate within a wider humidity range of 35% to 65% is its very strict grounding regimen. In a well-grounded system, an electrical charge has no place to build up and is more readily dissipated to ground. Mr. Admirand recalls his days at IBM when the ‘old timers’ would swear by wearing leather-soled shoes (conductive enough to make a connection to the grounded raised floor) and/or washing their hands (presumably to carry dampness with them) prior to entering their data centers, to avoid a shock from discharging the charge built up on themselves onto a surface.

Relative humidity vs. absolute humidity

While I think both relative and absolute humidity should be considered, many in the industry are still designing to and measuring relative humidity. PTS mechanical engineer John Lin, PhD, points out that only two psychrometric properties of the air are independent, and data center professionals have to control the air temperature. While we can directly control only one of the humidity values, it is possible to calculate the absolute humidity (humidity ratio) from air temperature and relative humidity. Therefore, data centers are fine as long as both temperature and relative humidity are within the permissible range.
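To make that concrete, here is a minimal sketch of the calculation John Lin describes, using the common Magnus-type approximation for saturation vapor pressure; the 75°F / 45% RH example and sea-level pressure are illustrative assumptions, and a psychrometric chart or library should back any engineering decision.

```python
import math

def saturation_vapor_pressure_kpa(temp_c: float) -> float:
    """Magnus-type approximation for saturation vapor pressure over water."""
    return 0.61094 * math.exp(17.625 * temp_c / (temp_c + 243.04))

def humidity_ratio(temp_c: float, rh_percent: float,
                   pressure_kpa: float = 101.325) -> float:
    """Absolute humidity (kg water vapor per kg dry air) from dry-bulb temp and RH."""
    p_vapor = (rh_percent / 100.0) * saturation_vapor_pressure_kpa(temp_c)
    return 0.622 * p_vapor / (pressure_kpa - p_vapor)

if __name__ == "__main__":
    # Example only: roughly 75 F (23.9 C) air at 45% relative humidity
    w = humidity_ratio(23.9, 45.0)
    print(f"Humidity ratio: {w:.5f} kg/kg ({w * 7000:.1f} grains/lb)")
```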

Coy Stine’s example is right on the mark. The high temperature delta between inlet and outlet air that can be realized in some dense IT equipment may lead to very low relative humidity air inside critical electronics, which can lead to electrostatic discharge (ESD). My experience, however, is that I am not encountering data loss scenarios due simply to ESD at the estimated 50-100 data centers I visit each year. This leads me to believe that there is a slight tendency to ‘make mountains out of molehills’ regarding ESD.

After further reflection on Stine’s scenario about the low relative humidity air at the back of the servers, I was reminded again by Mr. Admirand that it won’t make much of a difference since that air is being discharged back to the CRAC equipment. Furthermore, even if the air is recirculated back to the critical load’s inlet, the absolute moisture content of the air remains constant and the mixed air temperature is not low enough to cause a problem. John Lin contends this is the reason why we only control temperature and relative humidity.

It’s been our stance at PTS that the most important goal of humidity control is to regulate condensation. The only real danger to very warm, high moisture content air is that it will condense easily should its temperature drop below the dew point temperature.

Separate data center humidity from cooling units?

I have no doubt that R. Stephen Spinazzola’s conclusion that it is cheaper to operate humidity control as a stand-alone air handler is on target. However, experience dictates the approach is an uphill sell since the savings are indirect, realized only as part of operational savings. The reality is that the upfront capital cost is greater to deploy these systems, especially in a smaller environment where it is harder to control humidity anyway.

Humidity control is very dependent on the environment for which you are designing a system. In a large data center, it is actually easier because most, if not all, of the building is presumably a controlled data center environment. For SMEs with tenant-space computer rooms, however, humidity control is much more difficult since it is dictated by the overall building humidity environment. At best, a computer room is a giant sponge – the question is whether you are gaining water from or giving off water to the rest of the building.

The design and construction of a data center or computer room, including its cooling system, should meet the specific environmental needs of its equipment. For now, our approach at PTS Data Center Solutions has been to utilize humidity control within DX units for both small and large spaces. Conversely, we control humidity separately when deploying in-row, chilled water cooling techniques for higher density cooling applications in smaller sites.

For more information on data center humidity issues, read “Changing Cooling Requirements Leave Many Data Centers at Risk” or visit our Computer Room Cooling Systems page.

Friday, June 08, 2007

Recommended Reading: “The New Data Center”

There’s an interesting piece up at NetworkWorld.com on the leading trends in data center storage, titled “The New Style of Storage.” It touches on a variety of topics including e-discovery, eco-friendly storage technology, and virtualization.

This is part 3 in a six-part series that examines the latest technologies and practices for building “the New Data Center.” Taken together, it’s a bit of a long read, but well worth the time. Check it out when you have a chance.

Be sure to take a look at parts 1 and 2, as well:
Part 1, The New Data Center – Trends, Products & Practices for Next-Gen IT Infrastructure
Part 2, Defending Your Net – Tools and Tactics for Enterprise IT Security

Want to weigh in on what you’ve read? I’d love to hear it. Post your thoughts on the comments page for this entry.

Tuesday, May 15, 2007

High Density Devices Strain Data Center Resources

A few weeks back I commented on the current boom in data center development. Spurring this trend is the growing need for greater processing power and increased data storage capacity, as well as new Federal regulations which call for better handling and storage of data.

In the scramble to keep up with these demands, the deployment of high density devices and blade servers has become an attractive option for many data center managers. However, a new report from the Aperture Research Institute indicates that “many facilities are not able to handle the associated demand for power and cooling.”

The study, based on interviews with more than 100 data center professionals representing a broad spectrum of industries, reveals that the deployment of high density equipment is creating unforeseen challenges within many data centers.

Highlights of the report include:

  • While the majority of data center managers are currently running blade servers in their facilities, traditional servers still comprise the bulk of new server purchases. Mixing blade and non-blade servers in such small quantities can unnecessarily complicate the data center environment and make maintenance more difficult.
  • The rising power density of racks makes them more expensive to operate and more difficult to cool. More than one-third of the respondents said their average power density per rack was over 7 kW, a scenario that sets these facilities up for potential data center cooling issues and unexpected downtime.
  • Respondents report that the majority of data center outages were caused by human error and improper failover.
  • What’s really jaw-dropping is that while more than 22% of outages were due to overheating, 21% of respondents admit that they don’t know the maximum power density of their racks. The report points out that “[o]ver 8% of respondents are therefore using high-density devices without tracking power density in a rack, dramatically increasing the potential for outages.”
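The tracking gap in that last point is easy to close with even a crude script or spreadsheet. Here is a minimal sketch that flags racks crossing the roughly 7 kW density mentioned above; the per-rack wattages are made-up examples, and the threshold is only an illustration of the survey’s point, not a design limit.

```python
# Minimal sketch: flag racks whose measured (or nameplate-derated) load
# crosses a density threshold. The 7 kW figure echoes the survey above;
# the per-rack wattages are made-up examples.

DENSITY_ALERT_KW = 7.0

rack_load_watts = {"R01": 3400, "R02": 7850, "R03": 6100, "R04": 9200}

for rack, watts in sorted(rack_load_watts.items()):
    kw = watts / 1000.0
    flag = "  <-- above threshold, review cooling" if kw > DENSITY_ALERT_KW else ""
    print(f"{rack}: {kw:.1f} kW{flag}")
```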

High density equipment can help data centers keep up with business demands, but only if you can keep things running smoothly. Proper management of power and cooling is essential for meeting the end user's availability expectations. For more information on the various cooling challenges posed by high density rack systems, please visit our Data Center Cooling Challenges page at PTSDCS.com.

Tuesday, April 24, 2007

NEW from PTS: Computational Fluid Dynamic (CFD) Services

Behind the scenes at PTS Data Center Solutions, we’re always working to enhance our products, services and solutions in order to provide our clients with designs that offer optimum manageability and performance.

Our newest consulting service utilizes powerful 3-D Computational Fluid Dynamic (CFD) software to facilitate the design, operational analysis and maintenance of our clients’ data centers and computer rooms.

Here is an overview of the multiple applications of our CFD Services:

  • CFD Modeling as a Design Tool

By building CFD models of a mission critical space, engineers can quickly and efficiently review multiple design options. This allows for early detection of potential problems with air flow and heat distribution, thus permitting designers to provide an optimum solution.

  • CFD Operational Baseline Service

After the data center’s IT infrastructure has been populated, PTS uses CFD modeling to map the site and analyze the data center cooling characteristics down to the equipment level. By doing so, we can determine how variations in the position and design of equipment, as well as other factors, affect the room’s cooling profile.

  • Maintaining a CFD Modeled Computer Room

To ensure the high performance and manageability of a mission critical site, it is important to understand the effect that equipment changes will have before implementation takes place. Through CFD visualization, simulation and analysis, PTS’s consulting team can predict the impact of operational changes on the temperatures in the room. From there PTS is able to make recommendations for avoiding potential problems while planning for future growth. As part of the CFD modeling process, PTS maintains a complete asset inventory log as well as a detailed change order log, ensuring that infrastructure changes are tracked correctly.

If you’re interested in learning more about this data center consulting service, please visit our Computational Fluid Dynamic (CFD) Services page.

Request a Quote

To request a quote for PTS's CFD Baseline and/or Maintenance Services, please send an email to CFD@PTSdcs.com with the following information:

  1. The physical address of the location
  2. The square footage of the computer room to be modeled
  3. The number of server cabinets, racks, and stand-alone pieces of equipment in the computer room
  4. The number of IT infrastructure devices (servers, switches, routers, storage arrays, etc.) the computer room supports

Wednesday, April 11, 2007

Keeping It Clean in the Data Center

Spring is here. It’s the time of year when people throw open the windows, pull out the dust rags and fire up their vacuums for a burst of Spring Cleaning. This annual household ritual serves as a good reminder of the importance of regular cleanings within the data center environment.

Regularly scheduled site cleanings help to keep the data center environment free of dust, dirt and other particulates that can harm your operating systems and create health risks for employees. Particulates circulating within a data center can accumulate and interfere with electronics, causing a variety of potential problems, including media errors and data loss.

A good rule of thumb is to schedule data center cleanings on a quarterly basis, or when particulate counts exceed the standards set by ISO 14644-8 or ISO 14644-9. By sticking to this cleaning routine, companies optimize the performance of data center equipment while cutting down on the cost of repairs. When you compare the cost of regular cleaning sessions to the overall financial investment in your data center, it’s a smart buy.

Choosing a Data Center Cleaning Service

Don’t grab a broom and dustpan just yet. While it’s good to clean both houses and data centers on a regular basis, that’s where most of the similarities end. Cleaning a data center is a delicate process that requires the services of highly-trained professionals who know how to safely handle mission critical equipment.

To help you select the right cleaning service, here are some tips:

  • Check the company’s references. In addition to the quality of the service, you want to make sure the company has experience dealing with facilities that are similar to your own.
  • Make sure the company is insured for damages caused during the cleaning process. If an accident occurs, are you protected?
  • Evaluate the experience and training of the cleaning crew. For instance, are they trained to provide services per the requirements of International Standard ISO 14644?
  • Review the company’s cleaning methods to see if they use HEPA filtration vacuums and chemicals that are safe for use with electronics systems.
  • Be clear about your expectations for the service and establish parameters for cleaning. Will the technicians move equipment? Will they clean the sub-floor or above each rack? Are certain areas off-limits? What’s included in the service?
  • Look for a cleaning service that offers availability that meets your needs. In addition to yearly cleanings, will they be available for daily maintenance activities or in the event of an emergency?

Friday, April 06, 2007

Data Center Cooling: Approaches to Avoid

Data center cooling problems can compromise availability and increase costs. The ideal data center cooling system requires an adaptable, highly available, maintainable, manageable, and cost-effective design.

When working to design an effective data center cooling system, there are a number of commonly deployed data center cooling techniques that should not be implemented. They are:

  • Reducing the CRAC supply air temperature to compensate for hot spots
  • Using cabinet and/or enclosures with either roof-mounted fans and/or under-cabinet floor cut-outs, without internal baffles
  • Isolating high-density RLUs

Reducing CRAC Temperatures

Simply making the air colder will not solve a data center cooling problem. The root of the problem is either a lack of cold air volume at the equipment inlet or insufficient removal of hot return air from the equipment outlet. All things being equal, any piece of equipment with internal fans will cool itself. Typically, equipment manufacturers do not even specify an inlet temperature. They usually specify only the clear space that must be maintained at the front and rear of the equipment to ensure adequate convection.
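To put the volume point in rough numbers, a commonly used sensible-heat rule of thumb relates airflow to heat load and temperature rise (approximately CFM ≈ 3.16 × watts ÷ ΔT°F for sea-level air). The sketch below applies that approximation to an illustrative 5 kW cabinet; the load and temperature-rise values are assumptions, and a proper CFD or mechanical analysis should drive real designs.

```python
# Rough airflow requirement using the common sensible-heat rule of thumb
# CFM = 3.16 * watts / delta_T_F (sea-level air). Values are illustrative;
# a CFD or detailed mechanical analysis should drive real designs.

def required_cfm(load_watts: float, delta_t_f: float) -> float:
    return 3.16 * load_watts / delta_t_f

if __name__ == "__main__":
    load_w = 5000.0   # example 5 kW cabinet
    for dt in (15.0, 20.0, 25.0):   # assumed inlet-to-outlet temperature rise, deg F
        print(f"dT = {dt:>4.0f} F -> ~{required_cfm(load_w, dt):,.0f} CFM of cold air")
```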

Roof-mounted cabinet fans

CFD analysis conclusively proves that roof-mounted fans and under-cabinet air cut-outs will not sufficiently cool a cabinet unless air baffles are utilized to isolate the cold air and hot air sections. Without baffles, a roof-mounted fan will draw not only the desired hot air from the rear, but also a volume of cold air from the front before it can be drawn in by the IT load. This serves only to cool the volume of hot air, which we have already established is a bad strategy. Similarly, providing a cut-out in the access floor directly beneath the cabinet will deliver cold air to the inlet of the IT loads; however, it will also leak cold air into the hot aisle. Again, this only serves to cool the hot air.

Isolating high-density equipment

While isolating high-density equipment isn’t always a bad idea, special considerations must be made. Isolating the hot air is, in fact, a good idea. The problem is in achieving a sufficient volume of cold air from the raised floor. Even then, assuming enough perforated floor tiles are dedicated to providing a sufficient air volume, too much of the hot air re-circulates from the back of the equipment to the front air inlet and mixes with the cold air.

For more information on data center cooling, please download my newest white paper, Data Center Cooling Best Practices, at http://www.ptsdcs.com/white_papers.asp. You can also view additional publications at our Vendor White Papers page.

Thursday, March 22, 2007

The New Data Center Boom

Across the country, data center development is booming. Companies, including major players like Microsoft and Google, are buying up acres of land with the intent of building new data centers.

This rapid growth is, at least in part, spurred by the requirements of the Sarbanes-Oxley Act and the Health Insurance Portability & Accountability Act (HIPAA), which call for better handling and storage of data. Companies are also responding to the nationwide push to establish energy-efficient data centers. In order to accommodate state-of-the-art, next-generation data centers, companies simply need more space than their current facilities can provide.


Data Center Site Selection

For companies seeking to develop a new data center facility, high-quality site selection is of the utmost importance. By choosing a site location wisely, companies can save both time and money, while achieving scalability, flexibility and high availability.

Choosing a site that minimizes the natural and man made threats to continuous operation is the first step in provisioning a new data center. There are many factors to consider, including:

  • Natural Hazard Threats
  • Physical Location Threats
  • Terrorist Activity Threats
  • Environmental Contamination Threats
  • Site Accessibility
  • Amenities Access

It is interesting to note that the priority level of these factors is highly changeable. For instance, a decade ago it would have been more common for companies to seek site locations in close proximity to major cities and airports. However, in the wake of September 11th, data centers are more likely to spring up in smaller cities, reducing the likelihood of damage from terrorist attacks – and most especially in those areas of the country that have the lowest operating costs, including utility rates, land acquisition costs, labor rates, tax rates, and cost-of-living expenses.

To help navigate the complex process of site selection, many companies employ data center consultants for assistance in selecting an appropriate geography on which to locate their data center. Site selection services are the optimal way to ensure your mission critical facility is set up in both a location and a building that can support constant availability.

Thursday, February 22, 2007

Reducing Data Center Power Consumption

When it comes to data center power, less is clearly more. By reducing the amount of energy their data centers consume, companies can take a burden off electricity suppliers, protect the environment and increase their profits.

Many in the data center industry have already seen the light when it comes to reducing power usage. Technology companies are developing more efficient hardware, researchers are re-evaluating the possibility of converting to DC-power, electricity companies are offering financial incentives for data centers that significantly reduce their energy use, and corporations are revamping their data centers for maximum power efficiency.

This past December, Congress lent further support to the movement to reduce data center power consumption when it passed H.R. 5646 into law. The legislation calls upon the Environmental Protection Agency (EPA) to analyze the consumption of data center power by the federal government and private enterprises.

According to a report by Eric Bangeman of Ars Technica:

“The EPA’s study will fall under the auspices of its Energy Star program, which promotes the use of energy-efficient products. As part of the investigation, it will also consider incentives to encourage the deployment of more energy-efficient hardware in data centers.” The new legislation will help to raise awareness of data center power consumption and will spur the development of additional energy-saving solutions.

The government’s support of energy-efficient data centers creates a winning situation for everyone involved. The increased availability of Energy Star-rated technology, introduction of government-backed incentive programs, and growing public support for energy conservation make the decision to switch to energy-efficient technology an easy one.

Tuesday, February 20, 2007

Computer Room Design Tips

Computer rooms are an important component of the overall data center environment. Their purpose is to shelter network and server infrastructure as well as their related cabling, otherwise known as the computer room’s critical load.

In creating a secure and efficient computer room design, special consideration must be given to good planning and the implementation of the right technologies. The success of your design is dependent on the long-term scalability, flexibility and availability of your facility. Here are some computer room design tips to help your business optimize network performance, achieve its long-term availability goals and avoid costly problems in your computer room:

Power
In any mission critical environment, it’s important to provide adequate, scalable power for the load. Comprehensive load studies can produce a reasonable estimate of your facility’s power requirements. Once you’ve assessed the power needs of your computer room, conceptual and detailed planning can go forward.

Cooling
To design a computer room cooling system that operates effectively, you need a firm understanding of the amount of heat produced by the equipment contained in the enclosed space, along with the heat produced by other heat sources, such as conduction from adjacent spaces. Be sure to account for factors such as ceiling height, access floor depth, equipment layout and overall heat load.
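For a rough first pass at that tally (a detailed load study is still the right tool), here is a minimal sketch of a computer room heat-load estimate; the coefficients for power-path losses, lighting, and occupants are commonly cited approximations, and every input value is an illustrative assumption.

```python
# Very rough computer room heat-load tally. The coefficients below (UPS and
# power distribution losses ~10% of IT load, lighting ~2 W per sq ft, ~100 W
# of sensible heat per occupant) are commonly cited approximations, and every
# input is an illustrative assumption, not a substitute for a load study.

def estimate_heat_load_watts(it_load_w: float, floor_area_sqft: float,
                             occupants: int, conduction_w: float = 0.0) -> float:
    power_losses = 0.10 * it_load_w        # UPS + distribution losses
    lighting = 2.0 * floor_area_sqft       # lighting allowance
    people = 100.0 * occupants             # sensible heat per person
    return it_load_w + power_losses + lighting + people + conduction_w

if __name__ == "__main__":
    total_w = estimate_heat_load_watts(it_load_w=40_000, floor_area_sqft=1_200,
                                       occupants=2, conduction_w=1_500)
    print(f"Estimated heat load: {total_w / 1000:.1f} kW "
          f"(~{total_w * 3.412 / 12_000:.1f} tons of cooling)")
```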

Scalability
The design and construction of your computer room should meet the current technological needs of your business, while allowing for expansion along with the changing technology and business landscape. The use of modular systems, where the characteristics of the modules are known and the steps to add more modules are simple, is an excellent strategy to address growth without major disruptions.

Redundancy
High availability is accomplished by providing redundancy for all systems, major and minor, thereby eliminating single points of failure. By installing additional resources for system redundancy, hardware upgrades can be handled without fear of network failures. Incorporate redundant systems into your initial computer room design and continue to do so as your facility expands or upgrades its technology.

Monitoring
After your computer room is complete, the job of monitoring the IT and support infrastructure begins. Computer room monitoring is the vital last line of defense in achieving a high-availability environment. When evaluating monitoring systems, look for solutions that are cost effective, easy to use, designed with intuitive alarming and escalation methodologies, and built to provide robust reporting, all from a central, secure location.

Friday, January 26, 2007

Server Room Security Measures

The other day I was reading a news story about hollow coins being used for espionage, and it unexpectedly got me thinking about server room security issues. While I’m still not 100% sure of the best way to protect your facility against Canadian spy coins, I am aware of a number of techniques for guarding against unauthorized server room access.

To reduce downtime from accidents or sabotage due to the presence of unnecessary or malicious people, it’s important to implement server room security measures that account for a wide variety of potential threats. Whether building a new facility or renovating an old one, you’ll want to begin by mapping out your server room and identifying its most vulnerable areas. These may include access points, sensitive IT equipment and critical elements of the physical infrastructure.

Controlling Access to the Server Room

Server room security begins with controlling access to your facility. Security cards, biometrics and other auditable methods are commonly used to limit who is able to gain entry into the server room, but these methods can only do so much. Security cards, keys or passwords can fall into the wrong hands, while biometric devices are expensive and may accidentally keep out people who should have access.

If these were your only options, it would be a tradeoff between lower security with convenience and higher security with hassles. By pairing either of these methods with backups such as IP-based camera surveillance, security guards or dry contact sensors, your server room is much better protected. Rather than relying on one strategy, a combination of security measures will provide the best result, particularly if they grow more stringent as you move toward the heart of the facility. By combining methods, you increase reliability.
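A simple, purely illustrative calculation shows why layering helps: if each measure independently stops some fraction of unauthorized attempts, the chance of an attempt slipping past all of them shrinks multiplicatively. The effectiveness figures below are made up for the sake of the example.

```python
# Illustrative only: if independent security layers each stop a given fraction
# of unauthorized access attempts, the chance of an attempt defeating all of
# them shrinks multiplicatively. The effectiveness figures are made up.

layers = {"card access": 0.90, "camera surveillance": 0.70, "rack locks": 0.80}

p_defeat_all = 1.0
for name, effectiveness in layers.items():
    p_defeat_all *= (1.0 - effectiveness)

print(f"Chance an attempt defeats every layer: {p_defeat_all:.1%}")  # 0.6%
```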

Reinforcing Physical Infrastructure

From the ground up, the physical infrastructure of your facility should also contribute to your server room security. It pays to incorporate architectural and construction features that discourage or thwart intrusion. For example, make sure the walls of your server room extend past the ceiling, to the roof, to eliminate potential break-in points.

Reinforcing the physical infrastructure of your facility does more than just protect mission-critical IT equipment from theft or sabotage; it also gives protection to HVAC systems, power generators and fire suppression systems – anything that, if compromised, could result in downtime.

Securing IT Equipment

In addition to network security measures, it is important to implement physical security for IT equipment. Within the server room, rack-level security is a top concern. Rack locks defend against unauthorized access to critical equipment by limiting who can touch what. Not only does this help prevent sabotage, it also reduces the number of accidents and mistakes caused by workers interacting with technology that they may not be qualified to use.

Choosing a Security Solution

Every facility has its own unique security needs. When designing a security plan for your server room, carefully weigh your options. The goal is to find an acceptable compromise between security and its expense. By combining an assessment of risk tolerance with an analysis of available technologies and access requirements, it is possible to find an affordable, effective solution that will be accepted by users.

Thursday, January 18, 2007

Data Center Power Solutions

I receive a lot of questions from clients seeking solutions to their problems with data center power consumption. It seems that the higher energy costs rise, the more power the average data center needs. Overall costs for data center power may be skyrocketing, but there are ways to mitigate the expense. Here are some interesting suggestions I’ve come across lately:

- Utilize virtualization software
Virtualization technology is being trumpeted by many as a great way to get more bang for your data center buck. By means of virtualization you can reduce the number of servers that are required to run your applications, thereby increasing the operational efficiency of your data center.

This has become such a hot option that many manufacturers, including Intel, are now building virtualization capabilities into their chips. Companies such as Pacific Gas & Electric Co., which provides electricity to northern California, are jumping on virtualization as an opportunity to cut energy usage by offering financial kickbacks to data centers that save power after implementing virtualization technology.

- Switch from AC-power to DC-power
The idea of running data centers on DC power isn’t a new one. Since DC requires fewer conversions than AC, there’s great potential for energy savings – some researchers predict a 10 to 20 percent reduction in power costs. Making the switch seems like a no-brainer, right? Unfortunately, higher engineering and technology costs keep DC power from really catching on. It’s estimated that the cost of DC-compatible equipment can climb up to 40 percent higher than that of AC-based technology.
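A quick back-of-the-envelope comparison shows why the economics stall adoption. Every figure below is an illustrative assumption: a 500 kW facility load, $0.10 per kWh, 15% savings (within the 10 to 20 percent range above), and a 40 percent premium on an assumed $1M of equipment.

```python
# Back-of-the-envelope DC-vs-AC payback estimate. All inputs are illustrative
# assumptions: a 500 kW facility load, $0.10/kWh, 15% energy savings (within
# the 10-20% range quoted above), and a 40% premium on $1M of equipment.

facility_load_kw = 500.0
price_per_kwh = 0.10
annual_kwh = facility_load_kw * 24 * 365
annual_energy_cost = annual_kwh * price_per_kwh

savings_fraction = 0.15
annual_savings = annual_energy_cost * savings_fraction

baseline_equipment_cost = 1_000_000.0
dc_premium = 0.40 * baseline_equipment_cost

print(f"Annual energy cost: ${annual_energy_cost:,.0f}")
print(f"Annual savings at 15%: ${annual_savings:,.0f}")
print(f"DC equipment premium: ${dc_premium:,.0f}")
print(f"Simple payback: {dc_premium / annual_savings:.1f} years")
```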

- Install multi-core processors
Multi-core processors can give your data center’s hardware, as well as its energy savings, a boost. Multi-core processors may use slightly more power than a standard processor, but they run faster. This means one multi-core can do the job of several individual processors, reducing the amount of equipment and energy needed to get the job done.

- Optimize your cooling systems
Data center power and cooling go hand-in-hand. To optimize your CRAC systems and reduce energy consumption, focus on adjusting your air flow to eliminate hotspots. A/C units run most efficiently when operating at approximately 80% capacity and when they’re fed the hottest air. If you introduce additional cooling equipment without first trying to improve the efficiency of your existing setup, you’re doing your data center a disservice.

- Have a “Meeting of the Minds” between IT and Facilities Management
People in data center facilities management often complain that the IT team doesn’t properly consult them before purchasing new equipment, which leads to issues with data center power and cooling efficiency. The facilities management team has an intimate knowledge of the data center’s power and cooling infrastructure. By tapping into the combined experience of both departments, companies can sidestep potential energy wasters and keep the data center running at optimal efficiency.

For more solutions regarding data center power or other topics, please visit our Vendor White Paper archive.