It is 2010, and so many data center and IT managers are still relying on manually derated nameplate calculations to manage power throughout their power chain, even though many of these data centers are on the verge of running out of power and many have already experienced outages from tripped circuits. Many of these managers come to us looking for real-time power monitoring; plenty of solutions get evaluated, but few are ever implemented. I'm trying to figure out why so many are not investing in real-time power management.
The Green Grid's white paper "Proper Sizing of IT Power and Cooling Loads" discusses the fluctuations in IT power draw caused by inlet temperature changes, server component changes, virtualization, and more: http://www.thegreengrid.org/en/Global/Content/white-papers/Proper-Sizing-of-IT-Power-and-Cooling-Loads
I don't think we can overstate the danger of using derated nameplate figures to calculate power requirements. Unvirtualized servers typically run at around 15% processor utilization; once virtualized, we see utilization in the 60-95% range, which translates directly into power draw much closer to nameplate values, as the Green Grid points out in that white paper. Most IT organizations are rapidly adopting virtualization to consolidate and operate more efficiently, which is a good thing, but it is putting sudden pressure on previously underutilized power infrastructure in data centers.
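To make the utilization point concrete, here is a minimal sketch of why derated nameplate math breaks down as utilization climbs. The 80% derating factor, the idle-power fraction, and the simple linear power model are all illustrative assumptions, not vendor data; real servers should be measured, which is the whole point.

```python
# Illustrative sketch: estimate server power draw at two utilization levels.
# NAMEPLATE_W, DERATING, and IDLE_FRACTION are assumed values for illustration,
# not measurements from any specific hardware.

NAMEPLATE_W = 500      # PSU nameplate rating per server, watts (assumed)
DERATING = 0.80        # rule-of-thumb derating factor (assumed)
IDLE_FRACTION = 0.50   # share of full-load draw consumed at idle (assumed)

def estimated_draw(utilization: float) -> float:
    """Rough model: idle floor plus a utilization-proportional share."""
    full_load = NAMEPLATE_W * DERATING
    return full_load * (IDLE_FRACTION + (1 - IDLE_FRACTION) * utilization)

# An unvirtualized server at ~15% CPU vs. a consolidated host at ~90% CPU
low = estimated_draw(0.15)   # lightly loaded
high = estimated_draw(0.90)  # virtualized, heavily consolidated

print(f"~15% utilization: {low:.0f} W")   # ~230 W
print(f"~90% utilization: {high:.0f} W")  # ~380 W
```

Even under this toy model, the same box draws roughly 65% more power after consolidation, so a capacity plan sized against lightly loaded servers quietly evaporates.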
With so many variables in play, how can anyone depend on derated calculation tools? With so many real-time tools available to measure and trend power accurately, including branch circuit monitoring, outlet-level monitored power strips, in-line power meters, IPMI, and extensive software options, why are so many still using derated calculations to guesstimate the power they'll need for higher-density virtualized deployments? That guesswork leads to potential circuit breaker trips and built-in inefficiencies throughout the entire power chain. With rising power costs, less power capacity available, and so many organizations looking to operate a more efficient, "greener" data center footprint, I am amazed that so few are investing in real-time power monitoring tools that would let them plan and manage capacity effectively.
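As a sketch of what real-time monitoring buys you, here is a hypothetical example of trending per-circuit current readings (such as you'd get from a monitored PDU or branch-circuit monitor) and flagging circuits approaching their safe limit. The breaker rating and sample readings are made up for illustration; the 80% threshold reflects the common continuous-load rule of thumb.

```python
# Hypothetical sketch: trend current readings on one circuit and warn before
# the breaker trips. Readings are invented for illustration; the 80% figure
# is the common continuous-load rule of thumb, not a substitute for your
# electrical code or your equipment's documentation.

CIRCUIT_RATING_A = 30.0                  # breaker rating in amps (assumed)
SAFE_LIMIT = 0.80 * CIRCUIT_RATING_A     # continuous-load limit: 24.0 A

# Samples collected over time by a monitored power strip (invented data)
readings_amps = [14.2, 15.1, 18.7, 22.4, 23.9, 25.0]

peak = max(readings_amps)
headroom = SAFE_LIMIT - peak

print(f"peak draw: {peak:.1f} A, safe limit: {SAFE_LIMIT:.1f} A")
if headroom < 2.0:
    print("WARNING: circuit nearing its continuous-load limit")
```

A derated spreadsheet can't show you that last reading creeping past the safe limit; a trend line can, before the breaker does it for you.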