Showing posts with label data center cooling.

Wednesday, February 03, 2021

Webinar: The Data Center Reimagined – Lower Cost to Build and Operate Your Facility

It’s vital for companies to invest in data center operations that are simple in design, easy to operate, and minimize infrastructure needed to achieve data resiliency. By doing so, companies can realize energy efficiency and resiliency by building and operating mission-critical data centers that cost less, can be deployed faster, are easily scalable, and perform better.

In this webinar, Peter Sacco of PTS Data Center Solutions offers groundbreaking views for data center operators pursuing solutions to key operating challenges. Leveraging advanced technology like fuel cells, microgrids, and indirect evaporative cooling, PTS offers revolutionary solutions to cost and resiliency challenges.

REGISTER NOW

Key takeaways from the event include:

    • A balanced approach to cost, performance, speed of deployment, and ease of management
    • Reduced complexity in both data center facility and IT systems
    • Step-change reductions in CAPEX and OPEX without sacrificing resiliency
    • A data-centric workload strategy focused on latency, cost, security, application performance, platform reliability, and regulation

Date: Wednesday, February 10, 2021
Time: 11:00 AM to 12:30 PM Eastern Standard Time

REGISTER NOW

Wednesday, May 31, 2017

Is Your Data Center as Cool as It Once Was?

Trends in data center cooling have changed dramatically over the past few years. Outmoded “maximum cooling” approaches have given way to more efficient heat-rejection systems. Utilizing sophisticated economizer technologies, modern cooling systems are not only reducing energy costs, but are also operating far more efficiently, and are highly reliable.
Other recent improvements in data center cooling technology have spawned different perspectives on the use of chilled water vs. pumped refrigerant vs. air cooling. Depending on the business needs, a company’s budget, and the geography, current cooling approaches may introduce waterless systems that prove more suitable than, say, a chilled water system.
The days of keeping the data center below 55 degrees are likely over. Old standards are being challenged and, in many cases, operating a warmer data center will be just as reliable, more efficient, and certainly cheaper. And that will make your data center much “cooler” than it used to be.

Thursday, February 16, 2017

Data Center Cooling Advancements Let you Leave Your Jacket at Home

Data centers that used to be 55 degrees are now running comfortably at 75 degrees, which means your company is saving money, and your IT pros no longer need to bundle up to do their jobs.

Pete Sacco, President and Founder of PTS Data Center Solutions, Inc., was interviewed for a recent TechRepublic article on data center cooling:
 "...the perception is that cooler is better.
The answer is that's not the case"
  
- Pete Sacco, President, PTS Data Center Solutions, Inc.

Wednesday, July 24, 2013

Application of Nanotechnology to Cool a Data Center

As the latest heat wave in the Northeast passes, PTS Data Center Solutions engineers came across an interesting use of nanotechnology to overcome heating problems within a data center.

Mexico's Social Security and Health Administration (IMSS) had an issue with its data center in Monterrey, Mexico, which holds patient medical records. Even though the data center was air conditioned, the heat coming through the roof during the summer raised the data center temperature enough to cause the servers to automatically shut down to prevent heat damage.

Nansulate® insulating coating, a patented, award-winning clear coat technology, was applied to the roof of the Cenati Data Center in three coats. After application, Nansulate® effectively reduced the data center temperature to a safe level for the servers. The coating insulated the roof from excess heat transfer and stopped the server shutdowns caused by high temperatures.

Read the Full Nansulate® Crystal Case Study


Tuesday, August 30, 2011

Selecting the Optimal Data Center Cooling Solution

Pete Sacco, President & CEO of PTS Data Center Solutions, authored an interesting article on the most effective way to select an optimal data center cooling solution.

A better way to think about data center cooling is to forget the notion of adding 'cold' to a room. Rather, think about air conditioning as removing heat from the room. Selecting the optimal cooling solution involves a deep understanding and comparison between the performance characteristics, capital expense (CAPEX), and operational expense (OPEX) of each potential configuration.
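As a back-of-the-envelope illustration of that CAPEX/OPEX comparison, the short sketch below totals lifetime cost for two cooling configurations. Every figure in it is a hypothetical placeholder, not data from the article.

# Hypothetical configurations: (name, capital cost $, annual operating cost $)
configs = [
    ("Chilled water with economizer", 400_000, 45_000),
    ("DX CRAC units",                 250_000, 70_000),
]

LIFETIME_YEARS = 10  # assumed planning horizon

for name, capex, annual_opex in configs:
    total = capex + annual_opex * LIFETIME_YEARS
    print(f"{name}: ${total:,} total over {LIFETIME_YEARS} years")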

The article provides a deep dive into:
  • Establishing Suitable Design Criteria for Your Data Center Requirement
  • A Review of Leading Computer Room Air Conditioning Approaches
  • An Overview of the Role Played by a Data Center Design Consultant

Read the entire article by clicking here or learn more by contacting PTS.

To learn more about PTS services and solutions for data center cooling needs, click here.

Monday, July 16, 2007

PTS Weighs in on Data Center Humidity Issues

Mark Fontecchio’s recent article on data center humidity issues at SearchDataCenter.com not only created buzz in the data center blogs, but generated quite a discussion amongst our team at PTS Data Center Solutions.

Data center humidity range too strict?

While some data center professionals find the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE)’s recommended relative humidity of 40% to 55% to be restrictive, I think the tight ASHRAE standards have to be adhered to until further research proves otherwise.

PTS’s engineering manager, Dave Admirand, PE, notes that the reason the telecommunications industry is able to operate within a wider humidity range of 35% to 65% is its very strict grounding regimen. In a well-grounded system an electrical charge has no place to build up and is more readily dissipated to ground. Mr. Admirand recalls his days at IBM when the ‘old timers’ would swear by wearing leather-soled shoes (conductive enough to make a connection to the grounded raised floor) and/or washing their hands (presumably to carry dampness with them) prior to entering their data centers, to avoid a shock from discharging the charge built up on their bodies onto a surface.

Relative humidity vs. absolute humidity

While I think both relative and absolute humidity should be considered, many in the industry are still designing to and measuring relative humidity. PTS mechanical engineer John Lin, PhD, points out that only two properties of moist air are independent, and data center professionals have to control the air temperature. While we can directly control only one of the humidity values, the absolute humidity (humidity ratio) can be calculated from the air temperature and relative humidity. Therefore, data centers are fine as long as both temperature and relative humidity are within the permissible range.
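As a rough illustration of that calculation, here is a minimal Python sketch that estimates the humidity ratio from dry-bulb temperature and relative humidity using the Magnus approximation for saturation vapor pressure. The 25 C / 45% RH condition is a made-up example, not a recommended setpoint.

import math

def saturation_vapor_pressure_hpa(t_celsius):
    # Magnus approximation for saturation vapor pressure over water, in hPa
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def humidity_ratio(t_celsius, rh_percent, pressure_hpa=1013.25):
    # Absolute humidity as kg of water vapor per kg of dry air
    vapor_pressure = (rh_percent / 100.0) * saturation_vapor_pressure_hpa(t_celsius)
    return 0.622 * vapor_pressure / (pressure_hpa - vapor_pressure)

w = humidity_ratio(25.0, 45.0)  # hypothetical supply-air condition
print(f"Humidity ratio: {w:.4f} kg water per kg dry air")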

Coy Stine’s example is right on the mark. The high temperature delta between inlet and outlet air that can be realized in some dense IT equipment may lead to very low-humidity air inside critical electronics, which can lead to electrostatic discharge (ESD). My experience, however, is that I am not encountering data loss scenarios attributable to ESD at the estimated 50-100 data centers I visit each year. This leads me to believe that there is a slight tendency to ‘make mountains out of molehills’ regarding ESD.

After further reflection on Stine’s scenario about the low relative humidity air at the back of the servers, I was reminded again by Mr. Admirand that it won’t make much of a difference, since that air is being discharged back to the CRAC equipment. Furthermore, even if the air is recirculated back to the critical load’s inlet, the absolute moisture content of the air remains constant and the mixed air temperature is not low enough to cause a problem. John Lin contends this is the reason why we only control temperature and relative humidity.

It has been our stance at PTS that the most important goal of humidity control is to prevent condensation. The only real danger with very warm, high-moisture-content air is that it will condense readily should its temperature drop below the dew point.
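To make that condensation check concrete, here is a minimal sketch that estimates dew point from temperature and relative humidity (again via the Magnus approximation) and compares it to a cold surface temperature. The 35 C / 40% RH air and the 12 C surface are hypothetical values chosen only to show the comparison.

import math

def dew_point_celsius(t_celsius, rh_percent):
    # Dew point via the Magnus approximation (reasonable for roughly 0-60 C)
    a, b = 17.62, 243.12
    gamma = math.log(rh_percent / 100.0) + a * t_celsius / (b + t_celsius)
    return b * gamma / (a - gamma)

air_temp, rh = 35.0, 40.0   # warm, moderately moist return air (hypothetical)
surface_temp = 12.0         # e.g. a chilled water pipe or coil casing (hypothetical)

dp = dew_point_celsius(air_temp, rh)
print(f"Dew point: {dp:.1f} C")
print("Condensation risk" if surface_temp < dp else "No condensation expected")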

Separate data center humidity from cooling units?

I have no doubt that R. Stephen Spinazzola’s conclusion that it is cheaper to operate humidity control as a stand-alone air handler is on target. However, experience dictates that the approach is an uphill sell, since the savings are indirect, realized only as part of operational savings. The reality is that the upfront capital cost is greater to deploy these systems, especially in a smaller environment where it is harder to control humidity anyway.
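That trade-off can be framed as a simple payback calculation. The sketch below is only a framing device; all of the costs are invented placeholders, not figures from Spinazzola’s analysis.

# Hypothetical costs for illustration only
capex_integrated = 40_000        # humidity control built into the CRAC units
capex_standalone = 55_000        # dedicated stand-alone humidity-control air handler
annual_opex_integrated = 9_000   # assumed yearly energy and maintenance
annual_opex_standalone = 6_500

extra_capex = capex_standalone - capex_integrated
annual_savings = annual_opex_integrated - annual_opex_standalone
print(f"Simple payback: {extra_capex / annual_savings:.1f} years")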

Humidity control is very dependent on the environment for which you are designing a system. In a large data center it is actually easier, because most of the building is presumably a data-center-controlled environment. However, for SMEs with tenant-space computer rooms, humidity control is much more difficult, since it is dictated by the overall building humidity environment. At best, a computer room is a giant sponge – the question is whether you are gaining water from, or giving off water to, the rest of the building.

The design and construction of a data center or computer room, including its cooling system, should meet the specific environmental needs of its equipment. For now, our approach at PTS Data Center Solutions has been to utilize humidity control within DX units for both small and large spaces. Conversely, we control humidity separately when deploying in-row, chilled water cooling techniques for higher density cooling applications in smaller sites.

For more information on data center humidity issues, read “Changing Cooling Requirements Leave Many Data Centers at Risk” or visit our Computer Room Cooling Systems page.

Tuesday, May 15, 2007

High Density Devices Strain Data Center Resources

A few weeks back I commented on the current boom in data center development. Spurring this trend is the growing need for greater processing power and increased data storage capacity, as well as new Federal regulations which call for better handling and storage of data.

In the scramble to keep up with these demands, the deployment of high density devices and blade servers has become an attractive option for many data center managers. However, a new report from the Aperture Research Institute indicates that “many facilities are not able to handle the associated demand for power and cooling.”

The study, based on interviews with more than 100 data center professionals representing a broad spectrum of industries, reveals that the deployment of high density equipment is creating unforeseen challenges within many data centers.

Highlights of the report include:

  • While the majority of data center managers are currently running blade servers in their facilities, traditional servers still comprise the bulk of new server purchases. Mixing blade and non-blade servers in such small quantities can unnecessarily complicate the data center environment and make maintenance more difficult.
  • The rising power density of racks makes them more expensive to operate and more difficult to cool. More than one-third of the respondents said their average power density per rack was over 7 kW, a scenario that sets those facilities up for potential data center cooling issues and unexpected downtime.
  • Respondents report that the majority of data center outages were caused by human error and improper failover.
  • What’s really jaw-dropping is that while more than 22% of outages were due to overheating, 21% of respondents admit that they don’t know the maximum power density of their racks. The report points out that “[o]ver 8% of respondents are therefore using high-density devices without tracking power density in a rack, dramatically increasing the potential for outages.”

High density equipment can help data centers keep up with business demands, but only if you can keep things running smoothly. Proper management of power and cooling is essential for meeting the end user's availability expectations. For more information on the various cooling challenges posed by high density rack systems, please visit our Data Center Cooling Challenges page at PTSDCS.com.
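As a minimal sketch of the kind of rack-level tracking the report calls for, the snippet below sums per-device power draws and flags racks above a density threshold. The rack names, device wattages, and the 7 kW threshold (echoing the survey figure above) are illustrative assumptions.

# Nameplate or measured draw per device, in watts (hypothetical inventory)
racks = {
    "rack-A01": [350, 350, 420, 1200, 1200],
    "rack-A02": [450] * 18,
    "rack-B01": [5000, 3200],
}

DENSITY_ALERT_KW = 7.0  # threshold borrowed from the survey's >7 kW figure

for name, loads_w in racks.items():
    total_kw = sum(loads_w) / 1000.0
    status = "REVIEW COOLING" if total_kw > DENSITY_ALERT_KW else "ok"
    print(f"{name}: {total_kw:.1f} kW [{status}]")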

Friday, April 06, 2007

Data Center Cooling: Approaches to Avoid

Data center cooling problems can compromise availability and increase costs. The ideal data center cooling system requires an adaptable, highly available, maintainable, manageable, and cost-effective design.

When working to design an effective data center cooling system, there are a number of commonly deployed data center cooling techniques that should not be implemented. They are:

  • Reducing the CRAC supply air temperature to compensate for hot spots
  • Using cabinet and/or enclosures with either roof-mounted fans and/or under-cabinet floor cut-outs, without internal baffles
  • Isolating high-density RLUs

Reducing CRAC Temperatures

Simply making the air colder will not solve a data center cooling problem. The root of the problem is either a lack of cold air volume at the equipment inlet or a lack of sufficient hot return air removal from the outlet of the equipment. All things being equal, any piece of equipment with internal fans will cool itself. Typically, equipment manufacturers do not even specify an inlet temperature; they usually specify only the clear space that must be maintained at the front and rear of the equipment to ensure adequate convection.
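As a rough sanity check on the "air volume, not temperature" point, the sketch below uses the common standard-air rule of thumb, Q (BTU/hr) = 1.08 x CFM x dT (F), to estimate how much airflow a given load needs. The 5 kW load and 20 F temperature rise are hypothetical.

def required_airflow_cfm(load_watts, delta_t_f):
    # Sensible-heat rule of thumb: Q[BTU/hr] = 1.08 * CFM * dT[F],
    # with 3.412 BTU/hr per watt of IT load
    return load_watts * 3.412 / (1.08 * delta_t_f)

print(f"{required_airflow_cfm(5000, 20):.0f} CFM")  # roughly 790 CFM for 5 kW at a 20 F rise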

Roof-mounted cabinet fans

CFD analysis conclusively proves that roof-mounted fans and under-cabinet air cut-outs will not sufficiently cool a cabinet unless air baffles are utilized to isolate the cold air and hot air sections. Without baffles, a roof-mounted fan will draw not only the desired hot air from the rear, but also a volume of cold air from the front before it is drawn in by the IT load. This serves only to cool the volume of hot air, which we have previously established is a bad strategy. Similarly, providing a cut-out in the access floor directly beneath the cabinet will deliver cold air to the inlet of the IT loads; however, it will also leak air into the hot aisle. Again, this only serves to cool the hot air.

Isolating high-density equipment

While isolating high-density equipment isn’t always a bad idea, special considerations must be made. Isolating the hot air is, in fact, a good idea. The problem, however, is in achieving a sufficient volume of cold air from the raised floor. Even then, assuming enough perforated floor tiles are dedicated to providing a sufficient air volume, too much of the hot air recirculates from the back of the equipment to the front air inlet and combines with the cold air.
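To see why that recirculation matters, here is a small sketch (with hypothetical flows and temperatures) that mixes the tile-supplied cold air with recirculated exhaust to estimate the effective inlet temperature the equipment actually sees.

def mixed_inlet_temp_f(supply_cfm, supply_temp_f, recirc_cfm, exhaust_temp_f):
    # Flow-weighted mix of cold supply air and recirculated hot exhaust
    total_cfm = supply_cfm + recirc_cfm
    return (supply_cfm * supply_temp_f + recirc_cfm * exhaust_temp_f) / total_cfm

# Tiles deliver 600 CFM at 65 F, but the rack pulls 800 CFM,
# making up the remaining 200 CFM with 95 F exhaust air (all hypothetical)
print(f"Effective inlet: {mixed_inlet_temp_f(600, 65, 200, 95):.1f} F")  # about 72.5 F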

For more information on data center cooling, please download my newest White Paper, Data Center Cooling Best Practices, at http://www.ptsdcs.com/white_papers.asp. You can also view additional publications such as the following at our Vendor White Papers page: