Friday, June 05, 2009

Data Center Professionals Network

The other day I stumbled across the Data Center Professionals Network, a free online community for professionals from around the world who represent a cross section of the industry. Members include data center executives, engineering specialists, equipment suppliers, training companies, real-estate and building companies, colocation and wholesale businesses, and industry analysts. The recently launched networking site enables key players in the industry to easily connect, interact, and develop business opportunities.

According to Maike Mehlert, Director of Communications:

The Data Center Professionals Network has been set up to be a facilitator for doing business. It acts as a one-stop-shop for all aspects of the data center industry, from large corporations looking for co-location or real estate, or data centers looking for equipment suppliers or services, to engineers looking for advice or training.


Features of the social network include a personalized user profile, as well as access to job boards, business directories, press releases, classified ads, white papers, photos, videos and events.

I haven’t had a chance to join yet, but if you want to check it out, visit http://www.datacenterprofessionals.net/ (you can sign in using a Ning ID if you already have one). If you do visit the site, post a comment and let me know what you think.

Wednesday, May 20, 2009

How Big Should Large Screen Displays Be In Your Command & Control Room?

Many A/V planners struggle with deciding how big the large screen displays in their command & control rooms should be. There are some fairly involved calculations that will help you determine the minimum character size (sometimes referred to as 'x' size) for a given set of circumstances. This 'x-size' is defined as the height of the smallest coherent element within the presented material. Think of it in terms of a lower-case letter x.

This lower-case 'x' - which is really the same height as the smallest of the lower-case letters - should subtend not less than 10 arc minutes on a viewer's retina to be recognized at any viewing distance. Things get more complicated when viewers are located off axis from the center of the screen, since that requires a larger subtended angle, and there is some additional effect from colored symbols, the amount of time the image stays on screen, etc. As you can imagine, if you were sizing a screen to project a spreadsheet in order to review your Data Center metrics, you would want to use these calculations, which can be found in this ICIA publication: http://www.infocomm.org/cps/rde/xchg/infocomm/hs.xsl/9229.htm

A good free presentation on this subject can also be found at: http://www.educause.edu/Resources/DesignStandardsandPracticesfor/155327
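
If you want to sanity-check a design without wading through the full publication, the core geometry is simple enough to script. Here's a quick Python sketch of the on-axis case only; it ignores the off-axis, color, and dwell-time factors mentioned above, so treat it as a first approximation rather than a substitute for the ICIA calculations (the example distances in the loop are arbitrary):

```python
import math

def min_x_height_mm(viewing_distance_m, arc_minutes=10.0):
    """Smallest x-height (mm) that subtends the given angle at a distance."""
    theta = math.radians(arc_minutes / 60.0)  # arc minutes -> radians
    return 2 * viewing_distance_m * math.tan(theta / 2) * 1000

for d in (4, 8, 16):
    print(f"Farthest viewer at {d} m -> x-height of at least "
          f"{min_x_height_mm(d):.1f} mm")
# 4 m -> ~11.6 mm, 8 m -> ~23.3 mm, 16 m -> ~46.5 mm
```

In other words, once you know where your farthest viewer sits, the 10 arc-minute rule translates directly into a minimum character height, and from there into a minimum screen size for the material you plan to display.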

Tuesday, May 05, 2009

Should the New York Stock Exchange be hiding the location of its new Data Center?

I find it interesting that major financial institutions & government agencies attempt to hide the locations of their Data Centers. How effective can this non-disclosure aspect of security really be in today's media-frenzied world? Obviously not very effective, since NYSE's new Data Center build is already being talked about in Data Center Knowledge & The Bergen Record.

http://www.datacenterknowledge.com/archives/2009/05/04/financial-data-center-hiding-in-plain-sight/comment-page-1/#comment-3718

Even if details of this site location go unpublished, word from the employees & vendors who support the site will certainly spread. I'm not saying that we should broadcast the location of this Data Center in neon lights, but if a new Data Center is constructed covering all 4 disciplines of security {Physical, Operational, Logical & Structural} - POLS - does it matter whether the public knows where the Data Center is? It isn't likely that the NYSE can really hide the whereabouts of its ~400K square foot Data Center anyway. Most Data Center designers cover Physical & Logical security systems thoroughly, as those disciplines are maturing. What is often not covered thoroughly is Structural Security: organizations become so focused on getting a CO (certificate of occupancy) and getting the new Data Center live that they often don't protect themselves from the structural threats of fire, water, theft & wind.

How many Data Centers are built with only a 20-minute fire-rated door? How many Data Centers are built with more than a 10-15 minute Class 125 rating? The really interesting aspect of this point is that there are new building materials that can cover Structural Security and eliminate these unnecessary exposures while actually constructing the facility & obtaining the CO faster.

Thursday, April 30, 2009

Free Data Center Assessment Tools from the Department of Energy

It certainly shows where we are in this country when the Government is creating free tools to help us assess our efficiency and giving us guidance on how to improve our Data Center efficiency. What choice does the DoE have, with the rising demand for power from our Data Centers expected to reach 10% of total US power demand by 2011, while we have a growing need to reduce our carbon footprint & dependence on fossil fuels?

In my opinion, a couple of areas of caution are warranted in the use of these free tools. First, the tool is free, but you still have to have the means to collect the data to enter into it: details about the power consumption of your equipment & whether that equipment can be controlled, utility bills, temperature readings at rack inlet & on supply and return, airflow readings, etc. Second, the presentation & guidance suggest that we can use air-side & water-side economizers, decrease our airflow, and raise our water temperature & supply-air set points without even discussing the impact this could have on availability. The guidance for the tools mentions the use of thermography or CFD, but treats it as merely a suggested option in our analysis of improving DCiE while we are raising temperatures & decreasing airflow. These tools do present value, & they are free. I just wish our Government had stressed the tools' limitations & cautioned users on the other considerations that must be factored in, such as the availability requirements of your Data Center.

http://www1.eere.energy.gov/industry/saveenergynow/partnering_data_centers.html
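
As a refresher, the DCiE metric these tools report on is just IT power divided by total facility power (PUE is its reciprocal). Here's a quick sketch, with made-up meter readings standing in for the kind of data you'd actually have to collect before the tool becomes useful:

```python
# Hypothetical meter readings in kW -- substitute your own measured values.
it_load_kw = 480.0        # servers, storage & network gear
cooling_kw = 260.0        # CRAC units, chillers, pumps
power_loss_kw = 60.0      # UPS, PDU & transformer losses
lighting_misc_kw = 20.0   # lighting, office loads, etc.

total_facility_kw = it_load_kw + cooling_kw + power_loss_kw + lighting_misc_kw

dcie = it_load_kw / total_facility_kw   # fraction of power doing IT work
pue = total_facility_kw / it_load_kw    # PUE is the reciprocal of DCiE

print(f"DCiE = {dcie:.1%}, PUE = {pue:.2f}")   # DCiE = 58.5%, PUE = 1.71
```

The arithmetic is trivial; the hard part, as noted above, is gathering trustworthy readings for each of those inputs, and deciding how far you can push economizers and set points without compromising availability.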

Wednesday, April 15, 2009

How important is it to consider the Grid for my back-up data center & DR Plan?

It has been several years since the August 2003 Blackout, but I can't help thinking that we are all being lulled to sleep on the next major grid issue. There are only 3 main power grids in the US, so if I have my primary Data Center on the Eastern Interconnect, should my DR requirement be to locate my back-up site in TX on the ERCOT grid, or in the west on the WSCC grid? Or is there any benefit to locating in a different NERC region, of which there are 10 in the US? Can that benefit be equivalent to being on a separate grid? I doubt it, since the 2003 Blackout crossed multiple NERC regions within the Eastern grid.

http://www.eere.energy.gov/de/us_power_grids.html

Should I not be concerned with this & just choose or build a site with a higher level of redundant & back-up power? Is it more important to have the DR site in a location easily accessible to our technical experts than to have it on a different grid? Remember, 9/11 grounded flights, so if we had another event of that magnitude it would take days for my technical experts to reach our DR site, if they could get there at all. Of course, we can introduce many tools for full remote control & power control so that access to our physical environment becomes less important; so should I make it best practice to get that DR site on a separate grid? If I put DR-site location into my key design criteria, where should it fall on my priority list?

Monday, April 13, 2009

Going Green with Data Center Storage

Just saw an interesting article in Enterprise Strategies about the use of magnetic tape as an energy-efficient storage solution. In “Tape’s Role in the Green Data Center,” Mark Ferelli discusses how tape technology is making a comeback by helping to keep the data center green as utility bills rise. He explains:

The efficient use of disk can help with data center greening when a user reads and writes to the densest possible disk array to ensure capacity is maximized and more disk is not bought unnecessarily.

In archiving, on the other hand, the greenest option is tape, which uses less power and produces a lower heat output. This not only eases the bite of the utility bill but places less strain on HVAC systems. In contrast, the case can be made that using disk for archiving does more harm since disks that spin constantly use much more power and generate more heat.


Ferelli also takes a look at alternative power and cooling solutions, such as MAID (Massive Array of Idle Disks) storage arrays, in comparison with tape-based storage.
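
To put rough numbers on the power argument, here's a back-of-envelope comparison for a 100 TB archive. Every wattage and capacity figure below is my own illustrative assumption rather than a number from Ferelli's article, and cooling overhead would roughly double both totals:

```python
# Rough annual energy for a 100 TB archive; all figures are illustrative
# assumptions, not numbers from the article.
archive_tb = 100
hours_per_year = 8760

# Always-spinning disk: assume 1 TB drives drawing ~10 W each, 24x7.
disk_drives = archive_tb / 1          # one 1 TB drive per TB (2009-era)
disk_kwh = disk_drives * 10 / 1000 * hours_per_year

# Tape: cartridges on a shelf draw nothing; assume the library itself
# averages ~100 W for robotics and mostly idle drives.
tape_kwh = 100 / 1000 * hours_per_year

print(f"Disk archive: ~{disk_kwh:,.0f} kWh/yr")   # ~8,760 kWh
print(f"Tape archive: ~{tape_kwh:,.0f} kWh/yr")   # ~876 kWh
```

Even with generous assumptions for disk, the order-of-magnitude gap is hard to argue with for data that is written once and rarely read.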

What’s been your experience with energy-efficient storage technology? Do tape-based systems offer better power savings versus disk-based solutions?

Friday, April 03, 2009

Google Unveils Server with Built-in Battery Design

On Wednesday, for the first time, Google opened up about the innovative design of its custom-built servers.

The timing of the reveal, which coincided with April Fools' Day, left some wondering if the earth-shattering news was a prank. If it sounds too good to be true, it probably is, right? Not so in this case. In the interest of furthering energy efficiency in the industry, Google divulged that each of its servers has a built-in battery. This means that, rather than relying on uninterruptible power supplies (UPS) for backup power, each of Google's servers has its own 12-volt battery. The server-mounted batteries have proven to be cheaper than conventional UPS and provide greater efficiency.

Google offered additional insights into its server architecture, its advancements in the area of energy efficiency, and the company’s use of modular data centers. For the full details, I recommend reading Stephen Shankland’s coverage of the event at CNET News. It’s fascinating stuff. Plus, Google plans to launch a site in a few days with more info.

Thursday, April 02, 2009

Can The Container Approach Fit Your Data Center Plans?

Conventional Data Center facilities have a long history of difficulty keeping up with the increasing demands of new server & network hardware, so organizations are now looking for solutions that upgrade the facility along with the technology upgrade, rather than continuing to invest millions in engineering & construction upgrades to support higher densities, or bearing the expense of building or moving to new facilities that can handle those densities. Containers offer a repeatable, standard building block. Technology has long advanced faster than facilities architecture, and containerized solutions at least tie a large portion of the facility's advance to the technology's advance.

So why haven't we all already moved into containerized Data Center facilities, and why are so many new facilities underway that have no plans for containers? Hold on: Google just revealed for the first time that, since 2005, its data centers have been composed of standard shipping containers - each with 1,160 servers and a power consumption that can reach 250 kilowatts. First Google showed us all how to better use the internet; now have they shown us all how to build an efficient server & Data Center? The container reduces the real estate cost substantially, but the kW cost only marginally; Google really focused its attention on efficiency savings at the server level. Bravo!

The weak link in every data center project will always remain the ability of the site to provide adequate redundant capacity for emergency power & heat rejection. These issues do not go away in the container ideology. In fact, it could be argued that the net project cost in the container model could be greater, since the UPSs & CRAC units are often located within the container, which raises their overall count. Just as in any Data Center project, rightsizing the utility power, support infrastructure & back-up power to meet the short- & long-term goals of your key design criteria is the most important aspect of any containerized project.

What containers do accomplish is creating a repeatable standard & footprint for the IT load and for how the power, air & communications are distributed to it. Organizations are spending billions of dollars planning & engineering those aspects, in many cases only to find out their solution is dated by the time they install their IT load. With containers, when you upgrade your servers you are upgrading your power, air & communications simultaneously & keeping them aligned with your IT load.
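
A quick sanity check on Google's disclosed numbers shows just how lean the per-server draw is and how dense the container becomes. The server count and peak kW are from the CNET coverage; the container floor area is my own assumption:

```python
# From Google's disclosure (via CNET): 1,160 servers per container,
# peak draw up to 250 kW. The floor-area figure is my own assumption.
servers = 1160
peak_kw = 250.0

print(f"~{peak_kw * 1000 / servers:.0f} W per server at peak")  # ~216 W

container_floor_m2 = 28.0   # assumed usable floor of a 40 ft container
print(f"~{peak_kw / container_floor_m2:.1f} kW per square meter")  # ~8.9
```

That density is several times what a conventional raised-floor room typically supports, which is exactly why the emergency power and heat rejection feeding the container remain the hard part of the project.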

What about the small & medium business market? Yes, the containerized approach is a very viable alternative to a 100,000+ square foot conventional build, but what about smaller applications? A container provides an all-encompassing building block for technology & facility architecture, but in a fairly large footprint. Not everyone has a need for 1,400 U's of space or 22,400 processing cores, or the wherewithal to invest over $500K per modular component. Unless SMBs want to colocate or sign off to a managed service provider running their IT in a cloud in a new containerized Data Center, the container approach doesn't have a play for SMBs - or does it? There are certainly solutions in the market to help an SMB build its own smaller-footprint, high-density movable enclosure or mini-container; it's surprising there has been so little focus on that much larger market. We are exploring some containerized approaches to the SMB market that would also address branch & division applications for large organizations, where today's container offerings likely present too large a building block to be practical.

For more information about Containerized Data Centers & some of the methodologies for deployment I recommend Dennis Cronin's article in Mission Critical Magazine.

http://www.missioncriticalmagazine.com/CDA/Articles/Features/BNP_GUID_9-5-2006_A_10000000000000535271

And certainly see the details on CNET about Google's containers & servers.

http://news.cnet.com/8301-1001_3-10209580-92.html

Wednesday, April 01, 2009

Data Center Power Drain [Video]

Watch the recent news report from CBS5.com on what's being done to make San Francisco's data centers more energy efficient.

In the "On the Greenbeat" segment, reporter Jeffrey Schaub talks to Mark Breamfitt at PG&E and Miles Kelley at 365 Main about how utilities companies and the IT industry are working to reduce overall energy consumption. According to the report, each of 365 Main’s three Bay Area data centers uses as much power as a 150 story skyscraper, with 40 percent of that power used to cool the computers.

Wednesday, March 25, 2009

Spending on Data Centers to Increase in Coming Year

An independent survey of the U.S. data center industry commissioned by Digital Realty Trust indicates that spending on data centers will increase throughout 2009 and 2010.

Based on Web surveys of 300 IT decision makers at large corporations in North America, the study reveals that more than 80% of the surveyed companies are planning data center expansions in the next one to two years, with more than half of those companies planning to expand in two or more locations.

In addition, the surveyed companies plan to increase data center spending by an average of nearly 7% in the coming year. “This is a reflection of how companies view their datacenters as critical assets for increasing productivity while reducing costs," noted Chris Crosby, Senior Vice President of Digital Realty Trust.

To view the rest of the study findings, visit the Investor Relations section of DigitalRealtyTrust.com.