Wednesday, May 20, 2009

How Big Should Large Screen Displays Be In Your Command & Control Room?

Many A/V planners struggle with how big the large screen displays in their command & control rooms should be. There are some fairly involved calculations that can help you determine the minimum character size (sometimes referred to as the 'x' size) for a given set of conditions. This 'x' size is defined as the height of the smallest coherent element within the presented material. Think of it in terms of a lower-case letter x.

This lower-case 'x' - which is the same height as the smallest of the lower-case letters - should subtend not less than 10 arc minutes on a viewer's retina to be recognized at any viewing distance. Things get more complicated when viewers are located off axis from the center of the screen, since that requires a larger subtended angle, and there are additional effects from colored symbols, the amount of time the image is on screen, etc. As you can imagine, if you were sizing a screen to project a spreadsheet in order to review your Data Center metrics, you would want to use these calculations (which can be found in this ICIA publication).
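For the basic on-axis case, the geometry is simple enough to sketch. The snippet below is a minimal illustration of the 10-arc-minute rule only - it ignores the off-axis, color, and dwell-time factors mentioned above, so treat it as a starting point, not a substitute for the full ICIA calculations:

```python
import math

def min_x_height(viewing_distance_m, arc_minutes=10.0):
    """Smallest character ('x') height, in meters, that subtends
    the given visual angle at the farthest viewer's distance."""
    angle_rad = math.radians(arc_minutes / 60.0)  # arc minutes -> degrees -> radians
    return viewing_distance_m * math.tan(angle_rad)

# Farthest viewer sits 10 m from the screen:
h = min_x_height(10.0)
print(f"minimum x-height: {h * 1000:.1f} mm")  # ≈ 29.1 mm
```

Scale that x-height up by your content's total line count and you get a rough minimum screen height for a given spreadsheet or dashboard layout.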

Or a good free presentation on this subject can be found at:

Tuesday, May 05, 2009

Should New York Stock Exchange be hiding the location of its new Data Center?

I find it interesting that major financial institutions & government agencies attempt to hide the locations of their Data Centers. How effective can this non-disclosure aspect of security really be in today's media-frenzied world? Obviously not very effective, since NYSE's new Data Center build is already being talked about in Data Center Knowledge & The Bergen Record.

Even if details of this site location go unpublished, word from the employees & vendors who support the site will certainly spread. I'm not saying that we should broadcast the location of this Data Center in neon lights, but if a new Data Center is constructed covering all 4 disciplines of security {Physical, Operational, Logical & Structural} - POLS - will it matter whether the public knows where the Data Center is? It isn't likely that the NYSE can really hide the whereabouts of its ~400K square foot Data Center anyway. Most Data Center designers cover Physical & Logical security thoroughly, as those disciplines are maturing. What is often not covered thoroughly is Structural Security: organizations become so focused on getting a CO and getting the new Data Center live that they often don't cover themselves against the structural threats of fire, water, theft & wind.

How many Data Centers are built with only a 20-minute fire-rated door? How many Data Centers are built with more than a 10-15 minute Class 125 rating? The really interesting aspect of this point is that there are new building materials that can cover Structural Security and eliminate these unnecessary exposures while actually constructing the facility & obtaining the CO faster.

Thursday, April 30, 2009

Free Data Center Assessment Tools from the Department of Energy.

It certainly shows where we are at in this country when the Government is creating free tools to help us assess our efficiency and giving us guidance on how to improve our Data Center efficiency. Then again, what choice does the DoE have, with demand for power from our Data Centers expected to reach 10% of total US demand by 2011, while we have a growing need to reduce our carbon footprint & demand on fossil fuels?

In my opinion, a couple areas of caution are warranted in the use of these free tools. First, the tool is free, but you still have to have the means to collect the data to enter into it: details about the power consumption of your equipment & whether that equipment can be controlled, utility bills, temperature readings at the rack inlets & on supply and return, airflow readings, etc. Second, the presentation & guidance suggest that we can use air-side & water-side economizers, decrease our airflow, and raise our water temperature & supply-air set points without even discussing the impact this could have on availability. The guidance does discuss the use of thermography or CFD, but treats it as an optional suggestion in our analysis of improving DCiE, even while we are raising temperatures & decreasing airflow. These tools do present value & they are free. I just wish our Government had stressed the tools' limitations & cautioned users on other considerations that must be factored in, such as the availability requirements of your Data Center.
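For readers unfamiliar with the metric the DoE tools optimize, DCiE is simply the IT load expressed as a percentage of total facility power (the inverse of PUE). A minimal sketch, with the 500 kW / 1,000 kW figures chosen purely as an illustration:

```python
def dcie(it_power_kw, total_facility_power_kw):
    """Data Center infrastructure Efficiency: IT load as a
    percentage of total facility power (the inverse of PUE)."""
    return 100.0 * it_power_kw / total_facility_power_kw

# Hypothetical example: 500 kW of IT load in a facility drawing 1,000 kW total
print(f"DCiE = {dcie(500, 1000):.0f}%")   # 50% (i.e., a PUE of 2.0)
```

Note that the metric says nothing about availability - a site can raise its DCiE by running warmer and moving less air while quietly eroding its thermal safety margin, which is exactly the caution raised above.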

Wednesday, April 15, 2009

How important is it to consider the Grid for my back-up data center & DR Plan?

It has been several years since the August 2003 blackout, but I can't help thinking that we are all being lulled to sleep before the next major grid issue. There are only 3 main power grids in the US, so if I have my primary Data Center on the Eastern Interconnect, should my DR requirement be to locate my back-up site in TX on the ERCOT grid or in the west on the WSCC grid? Or is there any benefit to locating in a different NERC region, of which there are 10 in the US? Can that benefit be equivalent to being on a separate grid? I doubt it, since the 2003 blackout crossed multiple NERC regions within the Eastern grid.

Should I not be concerned with this at all & just choose or build a site with a higher level of redundant & back-up power? Is it more important to have the DR site in a location easily accessible to our technical experts than to have it on a different grid? Remember that 9/11 grounded flights, so if we had another event of that magnitude it could take days for my technical experts to get to our DR site, if they could get there at all. Of course, we can introduce many tools for full remote control & power control, making access to the physical environment less important - so should I make it best practice to get that DR site on a separate grid? If I put DR site location into my key design criteria, where should it fall on my priority list?

Monday, April 13, 2009

Going Green with Data Center Storage

Just saw an interesting article in Enterprise Strategies about the use of magnetic tape as an energy-efficient storage solution. In “Tape’s Role in the Green Data Center,” Mark Ferelli discusses how tape technology is making a comeback by helping to keep the data center green as utility bills rise. He explains:

The efficient use of disk can help with data center greening when a user reads and writes to the densest possible disk array to ensure capacity is maximized and more disk is not bought unnecessarily.

In archiving, on the other hand, the greenest option is tape, which uses less power and produces a lower heat output. This not only eases the bite of the utility bill but places less strain on HVAC systems. In contrast, the case can be made that using disk for archiving does more harm since disks that spin constantly use much more power and generate more heat.

Ferelli also takes a look at alternative power and cooling solutions, such as MAID (Massive Array of Idle Disks) storage arrays, in comparison with tape-based storage.

What’s been your experience with energy-efficient storage technology? Do tape-based systems offer better power savings versus disk-based solutions?

Friday, April 03, 2009

Google Unveils Server with Built-in Battery Design

For the first time on Wednesday, Google opened up about the innovative design of its custom-built servers.

The timing of the reveal, which coincided with April Fools' Day, left some wondering if the earth-shattering news was a prank. If it sounds too good to be true, it probably is, right? Not so in this case. In the interest of furthering energy efficiency in the industry, Google divulged that each of its servers has a built-in battery design. This means that, rather than relying on uninterruptible power supplies (UPS) for backup power, each of Google's servers has its own 12-volt battery. The server-mounted batteries have proven to be cheaper than conventional UPS systems and provide greater efficiency.

Google offered additional insights into its server architecture, its advancements in the area of energy efficiency, and the company’s use of modular data centers. For the full details, I recommend reading Stephen Shankland’s coverage of the event at CNET News. It’s fascinating stuff. Plus, Google plans to launch a site in a few days with more info.

Thursday, April 02, 2009

Can The Container Approach Fit Your Data Center Plans?

Conventional Data Center facilities have a long history of difficulty keeping up with the increasing demands of new server & network hardware, so organizations are now looking for solutions that upgrade the facility along with the technology, rather than continuing to invest millions in engineering & construction upgrades to support higher densities, or incurring the expense of having to build or move to new facilities that can handle those densities. Containers offer a repeatable standard building block. Technology has long advanced faster than facilities architecture, and containerized solutions at least bring a large portion of the facility architecture into step with the advance of the technology.

So why haven't we all already moved into containerized Data Center facilities, and why are so many new facilities underway with no plans for containers? Hold on: Google just revealed for the first time that since 2005, its data centers have been composed of standard shipping containers--each with 1,160 servers and a power consumption that can reach 250 kilowatts. First Google showed us all how to better use the internet; now have they shown us all how to build an efficient server & Data Center? The container reduces the real estate cost substantially, but the kW cost only marginally; Google really focused its attention on efficiency savings at the server level - bravo!

The weak link in every data center project will always remain the ability of the site to provide adequate redundant capacity, emergency power & heat rejection. These issues do not go away in the container ideology. In fact, it could be argued that the net project cost in the container model could be greater, since the UPSs & CRAC units are often located within the container, which drives their overall count higher. Just as in any Data Center project, right-sizing the utility power, support infrastructure & back-up power to meet the short- & long-term goals of your key design criteria is the most important consideration in any containerized project. What containers do accomplish is creating a repeatable standard & footprint for the IT load and for how the power, air & communications are distributed to it. Organizations are spending billions of dollars planning & engineering those aspects, in many cases only to find their solution dated by the time they install their IT load. With containers, when you upgrade your servers you are upgrading your power, air & communications simultaneously, keeping them aligned with your IT load.
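The per-server efficiency claim is easy to sanity-check from the two figures Google disclosed. This is back-of-the-envelope arithmetic only - 250 kW is a peak figure, not an average, and it includes the in-container cooling and power distribution load:

```python
# Figures cited above: 1,160 servers per container,
# power consumption that can reach 250 kW per container.
servers_per_container = 1160
container_peak_kw = 250.0

watts_per_server = container_peak_kw * 1000 / servers_per_container
print(f"peak budget per server: {watts_per_server:.0f} W")  # ≈ 216 W
```

A ceiling of roughly 216 W per server slot, cooling included, is a useful yardstick when comparing a container against the per-rack power budgets of a conventional raised-floor build.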

What about the small & medium business market? Yes, the containerized approach is a very viable alternative to a 100,000+ square foot conventional build, but what about smaller applications? A container provides an all-encompassing building block for technology & facility architecture, but in a fairly large footprint. Not everyone has a need for 1,400 U's of space, 22,400 processing cores, or the wherewithal to invest over $500K per modular component. Unless SMBs want to colocate or sign off to a managed service provider running their IT in a cloud in a new containerized Data Center, the container approach doesn't have a play for the SMB - or does it? There are certainly solutions in the market to help an SMB build its own smaller-footprint, high-density movable enclosure or mini-container; it's surprising there has been so little focus on that much larger market. We are exploring some containerized approaches to the SMB market that would also address branch & division applications for large organizations, where today's container offerings likely present too large a building block to be practical.

For more information about Containerized Data Centers & some of the methodologies for deployment I recommend Dennis Cronin's article in Mission Critical Magazine.

And certainly the details on CNET about Google's Containers & Servers.

Wednesday, April 01, 2009

Data Center Power Drain [Video]

Click here to watch a recent news report on what's being done to make San Francisco's data centers more energy efficient.

In the "On the Greenbeat" segment, reporter Jeffrey Schaub talks to Mark Breamfitt of PG&E and Miles Kelley of 365 Main about how utility companies and the IT industry are working to reduce overall energy consumption. According to the report, each of 365 Main's three Bay Area data centers uses as much power as a 150-story skyscraper, with 40 percent of that power used to cool the computers.

Wednesday, March 25, 2009

Spending on Data Centers to Increase in Coming Year

An independent survey of the U.S. data center industry commissioned by Digital Realty Trust indicates that spending on data centers will increase throughout 2009 and 2010.

Based on Web-based surveys of 300 IT decision-makers at large corporations in North America, the study reveals that more than 80% of the surveyed companies are planning data center expansions in the next one to two years, with more than half of those companies planning to expand in two or more locations.

In addition, the surveyed companies plan to increase data center spending by an average of nearly 7% in the coming year. “This is a reflection of how companies view their datacenters as critical assets for increasing productivity while reducing costs," noted Chris Crosby, Senior Vice President of Digital Realty Trust.

To view the rest of the study findings, visit the Investor Relations section of

Thursday, March 19, 2009

Top 3 Data Center Trends for 2009

Enterprise Systems just published the “Top Three Data Center Trends for 2009” by Duncan Campbell, vice president of worldwide marketing for adaptive infrastructure at HP. In the article, Campbell discusses how companies need to get the most out of their technology assets and, in the coming year, data centers will be pressured to "maintain high levels of efficiency while managing costs". In addition, companies will need to make an up-front investment in their data center assets in order to meet complex business demands.

Campbell predicts:
  • “There will be no shortage of cost-cutting initiatives for enterprise technology this year.”
  • “As virtualization continues to enable technology organizations to bring new levels of efficiency to the data center, the line between clients, servers, networks and storage devices will continue to blur.”
  • “Blade offerings will continue to mature in 2009. Server, storage, and networking blades will continue to improve their energy efficiency and reduce data center footprints. Vendors are also now developing specialty blades, finely tuned to run a specific application.”

Efficiency, agility, and scalability will remain priorities for companies. By taking advantage of innovative data center technologies, companies can further reduce costs while increasing productivity – a goal that is of particular importance during challenging economic times.