Monday, February 22, 2010

Considerations for Storage Consolidation

The growth of company files, e-mail, databases, and application data drives a constant need for more storage. But with many networks architected with storage directly attached to servers, growth means burdensome storage management and decreased asset utilization. Storage resources remain trapped behind individual servers, impeding data availability.

There are three storage consolidation architectures in common use today:
  • direct-attached storage (DAS),
  • network-attached storage (NAS), and
  • the storage area network (SAN).

DAS is the traditional structure, in which storage is tied directly to a server and is accessible only through that server. In NAS, the storage device has its own network address, and files can be stored and retrieved rapidly because they do not compete with other applications for the server's processor resources. The SAN is the most sophisticated architecture and usually employs Fibre Channel technology, although iSCSI-based SANs are becoming more popular due to their cost effectiveness. SANs are noted for high throughput and their ability to provide centralized storage for numerous subscribers over a large geographic area, and they support data sharing and data migration among servers.
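To make the file-level versus block-level distinction concrete, here is a minimal Python sketch (the mount point and device path are hypothetical). A NAS share is consumed as files over the network, while a DAS drive or SAN LUN is presented to the server as a raw block device that the server formats and manages itself.

```python
import os

# File-level access (NAS): the share is mounted like any local directory,
# and the filer handles the underlying disk layout.
NAS_MOUNT = "/mnt/nas_share"              # hypothetical NFS/CIFS mount point
with open(os.path.join(NAS_MOUNT, "report.txt"), "rb") as f:
    data = f.read()

# Block-level access (DAS or SAN): the server sees a raw disk, whether a local
# drive or an iSCSI/Fibre Channel LUN, and runs its own filesystem on top.
BLOCK_DEVICE = "/dev/sdb"                 # hypothetical LUN presented by the SAN
fd = os.open(BLOCK_DEVICE, os.O_RDONLY)   # typically requires root privileges
first_block = os.read(fd, 4096)           # read the first 4 KiB directly
os.close(fd)
```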

So how do you choose among DAS, NAS, and SAN architectures for Storage Consolidation? Once a particular approach has been decided, how do you determine which vendor solutions to consider? A number of factors go into making a qualified decision, including near- and long-term requirements, type of environment, data structures, and budget, to name a few. PTS approaches Storage Consolidation by leveraging our proven consulting approach:
  • to gather information on client needs,
  • survey the current storage approach, and
  • assess future requirements against their needs and the current approach.

Critical areas for review and analysis include:
  • Ease of current data storage management
  • Time spent modifying disk space size at the server level
  • Storage capacity requirements to meet long-term needs (see the forecasting sketch below)
  • Recoverability expectations in terms of Recovery Time Objectives and Recovery Point Objectives
  • Needed structuring of near- and off-line storage for survivability and ease of access to data
  • Security needed to maintain data storage integrity
  • Evolving storage complexity if current architecture is maintained
  • New applications considered for deployment
  • Requirement to provide Windows clustering
  • Interest in considering Thin Provisioning
  • Storage spending as a percentage of total IT budget
PTS reviews all of the items above, and more. We then design the best storage architecture for both near- and long-term requirements, and we are able to source, install, and manage leading-edge storage solutions from companies such as Dell and Hitachi.
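The long-term capacity item above lends itself to a quick back-of-the-envelope check. The sketch below is a minimal Python example, assuming hypothetical usage figures, a compound annual growth rate, and a flat headroom buffer; it is not PTS methodology, just a starting point for the sizing conversation.

```python
def forecast_capacity_tb(current_tb, annual_growth_rate, years, headroom=0.25):
    """Project the usable capacity needed after `years` of compound growth,
    plus a flat headroom buffer for snapshots, rebuilds, and usage spikes."""
    projected = current_tb * (1 + annual_growth_rate) ** years
    return projected * (1 + headroom)

# Example: 12 TB in use today, growing 40% per year, planned over 3 years.
needed = forecast_capacity_tb(12, 0.40, 3)
print(f"Plan for roughly {needed:.1f} TB of usable capacity")   # about 41 TB
```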

Ultimately, Storage Consolidation positively impacts the costs associated with managing your IT network in terms of redundancy, disaster recovery, and network management. It also allows for a more secure network, free from wasted assets tied to particular servers or data center components. Finally, the tasks of provisioning, monitoring, reporting, and delivering the right storage service levels can be time-consuming and complex, and Storage Consolidation will enhance your ability to manage your organization's data storage.

Tuesday, February 09, 2010

“Devils in the Details” Data Center Event - IMPORTANT UPDATE

** EVENT POSTPONED DUE TO INCLEMENT WEATHER **

The data center management event planned for tomorrow night, The Devils in the Details - Enhanced SAN & Switching Solutions for Next Generation Data Centers, has been rescheduled for March 30, 2010 due to the forecasted snow storm.

If you have questions regarding event tickets, please contact Amy Yencer at AYencer@ptsdcs.com.

For more details and the full agenda, visit our Data Center Management Event page. We hope to see you in March!

BLADE Network Technologies Wins Top Spot in 10G Data Center Switch Test

Congratulations to BLADE Network Technologies, PTS’ top-of-rack switch vendor and a trusted leader in data center networking, on winning the top spot in the 10G data center switch competition.

BLADE's RackSwitch G8124 received Network World's Clear Choice award in its lab test of top-of-rack 10G Ethernet data center switches for delivering a winning combination of features and performance as well as top energy efficiency. The BLADE product faced stiff competition from switches produced by Arista Networks, Cisco, Dell, Extreme, and HP, all of which sported at least 24 10-Gigabit interfaces. The products were measured against a 10-point comparison and subjected to three months of demanding performance tests.

To read the complete test review, visit http://www.networkworld.com/reviews/2010/011810-ethernet-switch-test.html.

Thursday, February 04, 2010

Why are so many still using guesswork to determine their needs for power?

It is 2010, and many data center and IT managers are still relying on manual, derated nameplate calculations to manage the power required throughout their power chain, even though many of these data centers are on the verge of running out of power and many have experienced outages due to tripped circuits. Data center and IT managers regularly come to us looking for real-time power monitoring; many solutions are evaluated, but few ever get implemented. I'm trying to figure out why so many are not investing in real-time power management.

The Green Grid's white paper "Proper Sizing of IT Power and Cooling Load" discusses the fluctuations in IT power draw caused by inlet temperature changes, server component changes, virtualization, and more: http://www.thegreengrid.org/en/Global/Content/white-papers/Proper-Sizing-of-IT-Power-and-Cooling-Loads

We should not underestimate the potential danger of using derated nameplate information to calculate power requirements. Unvirtualized servers typically use about 15% of their processing capacity; once virtualized, we see utilization in the 60-95% range, which translates directly into power draws much closer to nameplate values, as the Green Grid points out in the white paper. Most IT organizations are rapidly adopting virtualization technology to consolidate and operate more efficiently, which is a good thing, but it is putting rapid pressure on previously underutilized power infrastructures in data centers.
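To see why derated nameplate figures and virtualization mix badly, here is a minimal Python sketch of a rough linear power model. The 500 W nameplate, the 60% derating factor, and the assumption that a server idles at about half its nameplate draw are illustrative guesses, not figures from the Green Grid paper.

```python
def estimated_draw_watts(nameplate_w, cpu_utilization, idle_fraction=0.5):
    """Crude model: assume idle draw is a fraction of nameplate and that draw
    rises roughly linearly with CPU utilization toward the nameplate value."""
    idle_w = nameplate_w * idle_fraction
    return idle_w + (nameplate_w - idle_w) * cpu_utilization

NAMEPLATE_W = 500                   # hypothetical 1U server nameplate rating
DERATED_W = NAMEPLATE_W * 0.6       # a typical rule-of-thumb derating guess

unvirtualized = estimated_draw_watts(NAMEPLATE_W, 0.15)   # ~15% utilization
virtualized = estimated_draw_watts(NAMEPLATE_W, 0.80)     # within the 60-95% range

print(f"Derated guess:        {DERATED_W:.0f} W")
print(f"Unvirtualized (~15%): {unvirtualized:.0f} W")
print(f"Virtualized (~80%):   {virtualized:.0f} W")
```

Under these assumptions the derated guess overstates the lightly loaded server but understates the same box once it is virtualized and running hot, which is exactly the gap that leads to tripped breakers.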

With so many variables to account for, how can one depend on derated calculation tools? With so many real-time tools available to measure and trend power accurately, including branch circuit monitoring, outlet-level monitored power strips, in-line power meters, IPMI, and extensive software options, why are so many still trying to use derated calculations to guesstimate the power they'll need for higher-density virtualized deployments? This guesswork leads to potential circuit breaker trips and designed-in inefficiencies throughout the entire power chain. With rising power costs, less power capacity available, and so many organizations looking to operate a more efficient, "greener" data center footprint, I am amazed that so few are investing in real-time power monitoring tools that will allow them to plan and manage capacity effectively.
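As one hedged example of what real-time measurement can look like in practice, the sketch below polls a few servers' BMCs over IPMI using ipmitool's DCMI power reading (assuming the BMCs support DCMI; the host addresses, credentials, and 20 A / 208 V circuit are hypothetical) and compares the measured draw against the circuit's 80% continuous-load limit.

```python
import re
import subprocess

def read_power_watts(host, user, password):
    """Query a server's BMC for its instantaneous power draw over IPMI (DCMI).
    Assumes ipmitool is installed and the BMC supports DCMI power readings;
    the parsing targets the usual 'Instantaneous power reading' line and may
    need adjusting for a particular BMC."""
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
         "dcmi", "power", "reading"],
        capture_output=True, text=True, check=True).stdout
    match = re.search(r"Instantaneous power reading:\s*(\d+)\s*Watts", out)
    return int(match.group(1)) if match else None

# Hypothetical BMC addresses and credentials, and a 20 A / 208 V branch
# circuit held to the 80% continuous-load limit.
HOSTS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
CIRCUIT_LIMIT_W = 20 * 208 * 0.8

readings = [read_power_watts(h, "admin", "changeme") for h in HOSTS]
total = sum(r for r in readings if r is not None)
print(f"Measured draw: {total} W of {CIRCUIT_LIMIT_W:.0f} W usable "
      f"({total / CIRCUIT_LIMIT_W:.0%} of the circuit's continuous rating)")
```

Trending readings like these over time, rather than trusting a one-time derated calculation, is what makes it possible to plan capacity for higher-density virtualized deployments with confidence.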