Wednesday, April 15, 2009

How important is it to consider the Grid for my back-up data center & DR Plan?

It has been several years since the August 2003 Blackout, but I can't help thinking that we are all being lulled to sleep on the next major grid issue. There are only 3 main power grids in the US, so if my primary Data Center is on the Eastern Interconnect, should my DR requirement be to locate my back-up site in TX on the ERCOT Grid, or in the west on the WSCC Grid? Or is there any benefit to locating in a different NERC region, of which there are 10 in the US? Can that benefit be equivalent to being on a separate grid? I doubt it, since the 2003 Blackout crossed multiple NERC regions within the Eastern Grid.

http://www.eere.energy.gov/de/us_power_grids.html

Or should I not be concerned with this and simply choose or build a site with a higher level of redundant & back-up power? Is it more important to have the DR site in a location easily accessible to our technical experts than to have it on a different grid? Remember that 9/11 grounded flights, so if we had another event of that magnitude it could take days for my technical experts to reach our DR site, if they could get there at all. Of course, we can introduce many tools for full remote control & power control so that access to the physical environment becomes less important. So should I make it best practice to put the DR site on a separate grid? If I add DR site location to my key design criteria, where should it fall on my priority list?
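One way to reason about that priority list is a simple weighted scorecard across candidate DR sites. The sketch below is purely illustrative, not a recommendation from any standard: every criterion name, weight, and rating is a hypothetical placeholder you would replace with your own key design criteria.

```python
# Illustrative DR-site scorecard. All criteria, weights, and ratings
# below are hypothetical assumptions, not industry-standard values.

CRITERIA_WEIGHTS = {
    "separate_grid": 0.20,       # on a different interconnect than the primary site
    "staff_access": 0.30,        # reachable by car, not only by air
    "onsite_redundancy": 0.35,   # level of redundant & back-up power
    "remote_management": 0.15,   # lights-out / remote power control tooling
}

def score_site(ratings: dict) -> float:
    """Return the weighted score (0-10) for a candidate DR site.

    `ratings` maps each criterion to a 0-10 rating; missing criteria score 0.
    """
    return sum(CRITERIA_WEIGHTS[c] * ratings.get(c, 0) for c in CRITERIA_WEIGHTS)

# Example: a far-away ERCOT site vs. a nearby site on the same grid.
ercot_site = {"separate_grid": 10, "staff_access": 3,
              "onsite_redundancy": 8, "remote_management": 9}
nearby_site = {"separate_grid": 0, "staff_access": 10,
               "onsite_redundancy": 8, "remote_management": 9}

print(score_site(ercot_site), score_site(nearby_site))
```

With these particular weights the two sites score almost identically, which is really the point of the post: whether the separate grid wins depends entirely on how heavily you weight it against staff access and on-site redundancy.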

Monday, April 13, 2009

Going Green with Data Center Storage

Just saw an interesting article in Enterprise Strategies about the use of magnetic tape as an energy-efficient storage solution. In “Tape’s Role in the Green Data Center,” Mark Ferelli discusses how tape technology is making a comeback by helping to keep the data center green as utility bills rise. He explains:

The efficient use of disk can help with data center greening when a user reads and writes to the densest possible disk array to ensure capacity is maximized and more disk is not bought unnecessarily.

In archiving, on the other hand, the greenest option is tape, which uses less power and produces a lower heat output. This not only eases the bite of the utility bill but places less strain on HVAC systems. In contrast, the case can be made that using disk for archiving does more harm since disks that spin constantly use much more power and generate more heat.


Ferelli also takes a look at alternative power and cooling solutions, such as MAID (Massive Array of Idle Disks) storage arrays, in comparison with tape-based storage.

What’s been your experience with energy-efficient storage technology? Do tape-based systems offer better power savings than disk-based solutions?
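Ferelli's point about archiving lends itself to a back-of-envelope comparison. The sketch below is illustrative only: the wattages, cooling overhead, and utility rate are all assumed figures, not numbers from the article.

```python
# Back-of-envelope annual energy cost: always-spinning disk archive vs.
# a mostly idle tape library. All figures below are assumptions.

HOURS_PER_YEAR = 8760
RATE_USD_PER_KWH = 0.12            # assumed utility rate

def annual_energy_cost(avg_draw_watts: float, cooling_overhead: float = 0.6) -> float:
    """Cost of the IT draw plus cooling, with cooling modeled as a
    fixed fraction of IT power (0.6 assumed here)."""
    total_kw = avg_draw_watts / 1000 * (1 + cooling_overhead)
    return total_kw * HOURS_PER_YEAR * RATE_USD_PER_KWH

disk_archive_watts = 4500    # disk array spinning 24x7 (assumed)
tape_library_watts = 400     # tape library, mostly idle (assumed)

print(f"disk: ${annual_energy_cost(disk_archive_watts):,.0f}/yr")
print(f"tape: ${annual_energy_cost(tape_library_watts):,.0f}/yr")
```

Even with rough numbers, the double hit Ferelli describes is visible: constantly spinning disk pays once at the plug and again at the HVAC system, while tape draws meaningful power mainly when it is actually being read or written.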

Friday, April 03, 2009

Google Unveils Server with Built-in Battery Design

On Wednesday, Google opened up for the first time about the innovative design of its custom-built servers.

The timing of the reveal, which coincided with April Fool’s Day, left some wondering if the earth-shattering news was a prank. If it sounds too good to be true, it probably is, right? Not so in this case. In the interest of furthering energy efficiency in the industry, Google divulged that each of its servers has a built-in battery design. This means that, rather than relying on uninterruptible power supplies (UPS) for backup power, each of Google's servers has its own 12-volt battery. The server-mounted batteries have proven to be cheaper than conventional UPS and provide greater efficiency.
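To see why the on-board battery wins on efficiency, consider the conversion losses a central UPS introduces. The efficiency figures below are illustrative assumptions for the sake of the arithmetic, not Google's published numbers.

```python
# Why a per-server battery can beat a central UPS: a double-conversion
# UPS loses power in its AC->DC->AC path, while a battery floating on
# the server's 12V rail largely avoids that conversion. The efficiency
# values here are assumptions, not measured figures.

def annual_conversion_loss_kwh(it_load_kw: float, efficiency: float) -> float:
    """kWh lost per year feeding `it_load_kw` through a power stage
    of the given efficiency (0-1)."""
    input_kw = it_load_kw / efficiency
    return (input_kw - it_load_kw) * 8760

central_ups_loss = annual_conversion_loss_kwh(250, 0.92)    # assumed 92% efficient UPS
onboard_batt_loss = annual_conversion_loss_kwh(250, 0.999)  # assumed 99.9% efficient

print(f"central UPS loss:  {central_ups_loss:,.0f} kWh/yr")
print(f"on-board loss:     {onboard_batt_loss:,.0f} kWh/yr")
```

At these assumed efficiencies, the central UPS wastes nearly two orders of magnitude more energy per year on a 250 kW load, and every one of those lost kilowatt-hours also has to be removed by the cooling plant.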

Google offered additional insights into its server architecture, its advancements in the area of energy efficiency, and the company’s use of modular data centers. For the full details, I recommend reading Stephen Shankland’s coverage of the event at CNET News. It’s fascinating stuff. Plus, Google plans to launch a site in a few days with more info.

Thursday, April 02, 2009

Can The Container Approach Fit Your Data Center Plans?

Conventional Data Center Facilities have a long history of difficulty keeping up with the increasing demands of new server & network hardware. Organizations are therefore looking for solutions that upgrade the facility along with the technology, rather than continuing to invest millions in engineering & construction upgrades to support higher densities, or bearing the expense of building or moving to new facilities that can handle those densities. Containers offer a repeatable, standard building block. Technology has long advanced faster than facilities architecture, and containerized solutions at least tie a large portion of the facility advance to the technology advance.

So why haven't we all already moved into Containerized Data Center Facilities, and why are so many new facilities underway with no plans for containers? Hold on: Google just revealed for the first time that since 2005 its data centers have been composed of standard shipping containers, each with 1,160 servers and a power consumption that can reach 250 kilowatts. First Google showed us all how to better use the internet; now have they shown us all how to build an efficient server & Data Center? The container reduces the real estate cost substantially, but the kW cost only marginally. Google really focused its attention on efficiency savings at the server level. Bravo!

The weak link in every data center project will always remain the site's ability to provide adequate redundant capacity, emergency power & heat rejection. These issues do not go away in the container ideology. In fact, it could be argued that the net project cost in the container model could be greater, since the UPS's & CRAC units are often located within the container, which drives their overall count higher. Just as in any Data Center project, rightsizing the utility power, support infrastructure & back-up power to meet the short- & long-term goals of your key design criteria is the most important aspect to consider in any containerized project.

What containers do accomplish is creating a repeatable standard & footprint for the IT load and for how the power, air & communications are distributed to it. Organizations spend billions of dollars planning & engineering those aspects, in many cases only to find their solution is dated by the time they install their IT load. With containers, when you upgrade your servers you upgrade your power, air & communications simultaneously, keeping the facility aligned with your IT load.
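The figures Google disclosed make for a quick sanity check on density and on what rightsizing means in container terms. The per-container numbers below come from the reveal; the target IT load is an arbitrary example of mine.

```python
import math

# Figures from Google's reveal: 1,160 servers per container, up to 250 kW.
SERVERS_PER_CONTAINER = 1160
KW_PER_CONTAINER = 250

# Implied per-server draw at full container load.
watts_per_server = KW_PER_CONTAINER * 1000 / SERVERS_PER_CONTAINER
print(f"{watts_per_server:.0f} W per server at full container load")

# Rightsizing example: containers needed for a hypothetical 2 MW IT load.
target_it_load_kw = 2000
containers_needed = math.ceil(target_it_load_kw / KW_PER_CONTAINER)
print(f"{containers_needed} containers for a {target_it_load_kw} kW IT load")
```

Note what the arithmetic does not cover: the utility feed, emergency generation & heat rejection for those containers still have to be engineered at the site level, which is exactly the weak link described above.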

What about the small & medium business market? Yes, the containerized approach is a very viable alternative to a 100,000+ square foot conventional build, but what about smaller applications? A container provides an all-encompassing building block for technology & facility architecture, but in a fairly large footprint. Not everyone needs 1,400U of space, 22,400 processing cores, or the wherewithal to invest over $500K per modular component. Unless SMBs want to colocate or sign off to a managed service provider running their IT in a cloud in a new containerized Data Center, the container approach doesn't have a play for the SMB. Or does it? There are certainly solutions on the market to help an SMB build its own smaller-footprint, high-density movable enclosure or mini-container; it's surprising there has been so little focus on that much larger market. We are exploring some containerized approaches to the SMB market that would also address branch & division applications for large organizations, where today's container offerings likely present too large a building block to be practical.

For more information about Containerized Data Centers & some of the methodologies for deployment I recommend Dennis Cronin's article in Mission Critical Magazine.

http://www.missioncriticalmagazine.com/CDA/Articles/Features/BNP_GUID_9-5-2006_A_10000000000000535271

And certainly the details on CNET about Google's Containers & Servers.

http://news.cnet.com/8301-1001_3-10209580-92.html

Wednesday, April 01, 2009

Data Center Power Drain [Video]

Click here to watch a recent news report from CBS5.com on what's being done to make San Francisco's data centers more energy efficient.

In the "On the Greenbeat" segment, reporter Jeffrey Schaub talks to Mark Breamfitt at PG&E and Miles Kelley at 365 Main about how utilities companies and the IT industry are working to reduce overall energy consumption. According to the report, each of 365 Main’s three Bay Area data centers uses as much power as a 150 story skyscraper, with 40 percent of that power used to cool the computers.

Wednesday, March 25, 2009

Spending on Data Centers to Increase in Coming Year

An independent survey of the U.S. data center industry commissioned by Digital Realty Trust indicates that spending on data centers will increase throughout 2009 and 2010.

Based on Web-based surveys of 300 IT decision makers at large corporations in North America, the study reveals that more than 80% of the surveyed companies are planning data center expansions in the next one to two years, with more than half of those companies planning to expand in two or more locations.

In addition, the surveyed companies plan to increase data center spending by an average of nearly 7% in the coming year. “This is a reflection of how companies view their datacenters as critical assets for increasing productivity while reducing costs," noted Chris Crosby, Senior Vice President of Digital Realty Trust.

To view the rest of the study findings, visit the Investor Relations section of DigitalRealtyTrust.com.

Thursday, March 19, 2009

Top 3 Data Center Trends for 2009

Enterprise Systems just published the “Top Three Data Center Trends for 2009” by Duncan Campbell, vice president of worldwide marketing for adaptive infrastructure at HP. In the article, Campbell discusses how companies need to get the most out of their technology assets and, in the coming year, data centers will be pressured to "maintain high levels of efficiency while managing costs". In addition, companies will need to make an up-front investment in their data center assets in order to meet complex business demands.

Campbell predicts:
  • “There will be no shortage of cost-cutting initiatives for enterprise technology this year.”
  • “As virtualization continues to enable technology organizations to bring new levels of efficiency to the data center, the line between clients, servers, networks and storage devices will continue to blur.”
  • “Blade offerings will continue to mature in 2009. Server, storage, and networking blades will continue to improve their energy efficiency and reduce data center footprints. Vendors are also now developing specialty blades, finely tuned to run a specific application.”

Efficiency, agility, and scalability will remain priorities for companies. By taking advantage of innovative data center technologies, companies can further reduce costs while increasing productivity – a goal that is of particular importance during challenging economic times.

Wednesday, March 11, 2009

It’s Nap Time for Data Centers

Yesterday at the International Conference on Architectural Support for Programming Languages and Operating Systems in Washington, D.C., researchers from the University of Michigan presented a paper, titled “PowerNap: Eliminating Server Idle Power”.

“One of the largest sources of energy-inefficiency is the substantial energy used by idle equipment that is powered on, but not performing useful work,” says Thomas Wenisch, assistant professor in the department of Electrical Engineering and Computer Science. In response to this problem, Wenisch's team has developed a technique to eliminate server idle-power waste.

Their paper addresses the energy efficiency of data center computer systems and outlines a plan for cutting data center energy consumption by as much as 75 percent. This would be accomplished through the concurrent use of PowerNap and the Redundant Array for Inexpensive Load Sharing (RAILS). PowerNap is an energy-conservation approach that would enable the entire system to transition rapidly between a high-performance active state and a near zero-power idle state in response to instantaneous load, essentially putting servers to sleep as you would an ordinary laptop. RAILS is a power provisioning approach that provides high conversion efficiency across the entire range of PowerNap’s power demands.
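A simple utilization-weighted model shows where PowerNap's savings come from: servers spend most of their time idle, and conventional servers still draw a large fraction of peak power while idle. The wattages and utilization below are illustrative assumptions of mine, not figures from the paper.

```python
# Average server power as a utilization-weighted mix of active and idle
# draw. PowerNap's contribution is driving the idle term toward zero.
# All wattages and the utilization figure are illustrative assumptions.

def avg_power_watts(utilization: float, p_active: float, p_idle: float) -> float:
    """Average draw for a server that is busy `utilization` fraction
    of the time and idle the rest."""
    return utilization * p_active + (1 - utilization) * p_idle

util = 0.20   # assumed: the server is doing useful work only 20% of the time

conventional = avg_power_watts(util, p_active=300, p_idle=180)  # idle still ~60% of peak
powernap = avg_power_watts(util, p_active=300, p_idle=10)       # near zero-power nap state

print(f"conventional: {conventional:.0f} W average")
print(f"with PowerNap: {powernap:.0f} W average")
```

Under these assumptions the average draw falls by roughly two-thirds, which is in the same ballpark as the ~70% reduction the paper reports for Web 2.0 servers.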

The paper concludes:

PowerNap yields a striking reduction in average power relative to Blade of nearly 70% for Web 2.0 servers. Improving the power system with RAILS shaves another 26%. Our total power cost estimates demonstrate the true value of PowerNap with RAILS: our solution provides power cost reductions of nearly 80% for Web 2.0 servers and 70% for Enterprise IT.


To read the full text, please visit Wenisch’s site to download a PDF of the paper: http://www.eecs.umich.edu/~twenisch/?page=publications.php.

Monday, March 09, 2009

Finding the Silver Lining During an Economic Downturn

It seems, no matter which way you look these days, there’s more bad news. Job losses are up. The stock market is down. But not every business is focusing on the negative. In fact, there’s even a growing list of companies refusing to take part in the recession. As Jamie Turner at the 60 Second Marketer writes:

To be sure, times are tough. They’re downright B-A-D. But the world isn’t ending. The sky is not falling. In fact, you and your business will be here tomorrow and the next day — if you stop focusing on the negative and start focusing on the positive.


In light of this, I’d like to highlight one company who sees data center opportunity despite the poor economy: Juniper Networks. According to this article in Network World, Juniper has “launched an aggressive campaign to expand its enterprise business with a targeted assault on the data center.” They’ve announced a project, called Stratus, which their blog describes as an attempt to “create a single data center fabric with the flexibility and performance to scale to super data centers, while continuing to drive down the cost and complexity of managing the data center information infrastructure.”

And why announce Stratus now? Tom Nolle, president of consultancy CIMI Corp, explains: “Juniper cannot hope to match Cisco in breadth so it is making that an asset instead of a liability. Juniper is timing its success with Stratus to the economy's recovery and to developing symbioses with partners.”

That’s the kind of strategic, fighting spirit that helps a company come out on top, wouldn’t you say?

Friday, February 20, 2009

Improving Mobile Applications in the Enterprise

Look for Michael Petrino, vice president of PTS Data Center Solutions, in the latest issue of PROCESSOR (Vol.31, Issue 8).

In "Essential Mobile Tools: Maximize Your Mobile Toolset to Better Unlock Wireless’ Potential", Petrino shares his thoughts on the importance of establishing the right power infrastructure in order to improve the broadcast range of on-campus wireless connections.

The article discusses several easy-to-implement ways that enterprises can make better use of mobile applications so that they can support mobile employees without placing an unnecessary burden on the data center or IT support teams. It features insights from Robert Enderle, an analyst for the Enderle Group, and Joel Young, CTO and senior vice president of R&D at Digi International.

To read the full article, please visit PROCESSOR.com.