Thursday, April 11, 2013

Critical Considerations during a Data Center Migration

If you've got more than a rack or two in your data center or computer room, a data center migration is rife with risk. Who wants to lie awake in the weeks before the migration wondering if they've missed something? Will everything go smoothly? Did I make the right choices for services companies, infrastructure upgrades, network service providers, etc.?

In a nutshell, planning and perspective are critical for data center managers when it's time to complete a migration (or consolidation) of data center assets. Planning and perspective allow you to take a step back and make sure your approach holds water, to check with peers in the industry for accepted best practices, and, not least, to keep your job when the migration goes smoothly.

Critical Considerations in Preparation for a Data Center Migration include:
  • Think About the Layout. Flow through a data center is critical to developing efficiencies. Flow includes power from the utility through distribution to feeders to PDUs, as well as battery backup and utility backup (generators), and is driven by a coherent data center design. In addition to power, think about network connectivity from the ingress at the street through to the network core. Also, how will data flow from core to distribution to access, out to server/storage assets? A simple rule of thumb: firewalls, DMZs, and network termination equipment should all be located close to the network entrance and/or network rack.
  • Plan for Growth. It isn't enough to plan for growth within today's paradigm and technology. Rather, if at all possible, it's critical to consider the next two life cycles in technology. This means researching expected future rack power requirements as well as the data center's key design criteria for today and 2-3 years into the future. Who would have thought 5 kW of redundant power at the rack may not be enough? It isn't, if your organization is planning to roll out blade server cabinets. Don't get caught having to migrate yet again.
  • Plan the Cable Plant. Cabling architecture is the backbone of the data center network infrastructure. Careful planning and consideration are important when deciding on a data center cabling architecture. Key concerns are scalability, flexibility, manageability, availability, and total cost. Therefore, it is critical to plan in advance, leaving space for core switches and for future growth of the core and distribution switches and cable plant. Also, particularly if you are using a raised-floor approach, deploy your cabinets, pull fiber to the cabinets, and run branch circuits for power up front. The incremental cost of fiber and power cables waiting for use is minimal, you already have the labor onsite, and who wants an invasive change or upgrade several years down the road?
  • Confirm the Asset Inventory. A data center migration gives you the opportunity to "clean out your attic". Like moving between homes, you shouldn't migrate or relocate assets that are decommissioned or missing from the data center inventory list. Assets should be recorded in your Configuration Management Database (CMDB), including owner, department, business processes, applications, and dependencies. In fact, all data center assets should be tracked and maintained both before the migration and after it takes place.
  • Develop a Complete Relocation Plan. The final step in the data center migration is the relocation itself. Data Center relocations are expensive and require specific expertise and experience. Elements of a solid relocation plan include: Pre-planning and project management, pre-move site preparation, move plan creation, and post-move reviews.
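As a sketch of the inventory step, here is what a minimal CMDB asset record and migration manifest might look like. The field names and structure are illustrative assumptions, not a standard CMDB schema:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One CMDB record; fields follow the attributes named above
    (owner, department, applications, dependencies)."""
    asset_id: str
    owner: str
    department: str
    applications: list = field(default_factory=list)
    dependencies: list = field(default_factory=list)
    decommissioned: bool = False

def migration_manifest(cmdb):
    """Return only the assets that should actually be relocated,
    filtering out anything already decommissioned."""
    return [a for a in cmdb if not a.decommissioned]

# Example: a decommissioned server never makes the moving truck.
cmdb = [
    Asset("srv-01", "jdoe", "finance", applications=["ERP"]),
    Asset("srv-02", "jdoe", "finance", decommissioned=True),
]
manifest = migration_manifest(cmdb)
```

Building the manifest from the CMDB, rather than from a walk of the floor, is what keeps decommissioned gear from being migrated by accident.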
Ultimately, a Data Center Migration requires careful planning, continuous communications, solid contributions from internal and external team members, and risk mitigation plans if/when the unexpected happens. Data Center Consulting Services are available from the consultants at PTS Data Center Solutions.

Wednesday, April 03, 2013

Have You Made the Move to Wireless Monitoring?

PTS Data Center Solutions Consultants are finding that most data center facilities operators don’t want to burden their expensive network infrastructure with environmental and power monitoring solutions. We all know we can’t manage what we don’t measure, but friction between facilities and IT over who should own the problem often prevents both groups from effectively monitoring environmental conditions and optimizing their data centers.

Wireless sensor solutions not only eliminate the resistance we might get when trying to have network infrastructure allocated for environmental monitoring; they also deploy quickly, scale easily, and offer the flexibility to move sensors around for testing or as new equipment is deployed. The wireless environmental monitoring market is growing quickly, and here are some of the solutions PTS has evaluated and implemented. Some solutions provide only monitoring, while others add analytics and/or control software. We are interested in hearing from others about their experiences and why wireless monitoring has or has not worked in your data center.

Aurora
Innovative patent-pending technology couples a series of 8 temperature sensors with an array of high-intensity LEDs. This design provides the appearance of a “live” CFD by visually displaying a range from cool (blue) to hot (red), with a blend of 129 colors in between. The Aurora is the perfect self-policing, real-time troubleshooting tool to clearly identify potential cooling or heat-related issues in your racks. Aurora is extremely accurate because it measures air temperature, not surface temperature. Three user-selectable sensitivity settings allow you to fine-tune the monitored temperature range of the full 129-color spectrum. Because temperature is monitored over the entire height of the cabinet, Aurora is perfect for aisle containment and areas susceptible to temperature stratification. An optional wireless communication and management interface enables the temperature readings for all 8 sensors per strip to be captured for trending and alerting purposes.
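To illustrate the idea of mapping a monitored temperature range onto a cool-to-hot color gradient, here is a minimal sketch. The range endpoints stand in for the user-selectable sensitivity settings, and the linear blend is our own assumption, not Aurora's actual color algorithm:

```python
def temp_to_rgb(temp_c, lo=18.0, hi=32.0):
    """Map a temperature onto a blue (cool) to red (hot) gradient.

    lo/hi mimic a user-selectable sensitivity range; readings outside
    it are clamped to the endpoint colors.
    """
    frac = max(0.0, min(1.0, (temp_c - lo) / (hi - lo)))
    # Blend linearly: fully blue at lo, fully red at hi.
    return (int(255 * frac), 0, int(255 * (1 - frac)))
```

A narrower lo/hi window makes the same few degrees of drift span more of the color range, which is the point of a tunable sensitivity setting.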

Packet Power
Uses a wireless mesh network to monitor the inline power meters, temperature, humidity, and air pressure that the management of complex facilities requires. The data collected from these sensors can be managed in a cloud portal called EMX, in Packet Power's own Power Manager software, or via a gateway providing SNMP and Modbus TCP/IP connectivity to link the wireless monitoring devices to your existing data center monitoring software.
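As a sketch of what the Modbus TCP/IP integration path involves, the function below builds a standard read-holding-registers request frame. The unit ID and register address here are placeholders; a real deployment would use the register map published for the gateway:

```python
import struct

def modbus_read_request(unit_id, start_addr, count, txn_id=1):
    """Build a Modbus TCP ADU for function 0x03 (read holding registers).

    Register addresses are illustrative; consult the gateway's register
    map for the actual power-meter registers.
    """
    # PDU: function code, starting address, register count (big-endian)
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0), remaining length, unit id
    mbap = struct.pack(">HHHB", txn_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu
```

Sending this frame over a TCP socket to port 502 and parsing the response is all an existing monitoring package needs to pull readings from such a gateway.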

RF Code
Manufactures RFID environmental tags (temperature, humidity, pressure, and PDU tags) that work with ServerTech, Geist, and Raritan metered and switched PDUs. RF Code also offers RFID asset tags if you want to track where assets are, down to the rack U position in which they are installed. Each 433.92 MHz RFID reader can support up to 1,400 RFID tags, and the reader can communicate over multiple wired or wireless networks to feed sensor information into various software packages. Alternatively, RF Code's Sensor Manager collects information from all types of its wire-free sensor tags, organizing it by sensor type as well as sensor location. All information collected by Sensor Manager can be viewed interactively in real time via an easy-to-use, browser-based console, and accessed via customized table views or graphically via map views. Historical data can be organized into reports and graphs using the standard reporting and graphing capability or RF Code's Advanced Reporting Module, which utilizes the open-source BIRT reporting engine.
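The type-and-location bucketing a sensor manager performs before building table or map views can be sketched in a few lines; the reading fields below are our own illustrative schema, not RF Code's data model:

```python
from collections import defaultdict

def organize(readings):
    """Group sensor readings by (type, location), the way a
    sensor-manager console might bucket tag data for table views."""
    grouped = defaultdict(list)
    for r in readings:
        grouped[(r["type"], r["location"])].append(r["value"])
    return dict(grouped)

# Example: two temperature tags on the same rack land in one bucket.
views = organize([
    {"type": "temp", "location": "rack-1", "value": 24.5},
    {"type": "temp", "location": "rack-1", "value": 25.0},
    {"type": "humidity", "location": "rack-1", "value": 41.0},
])
```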

SynapSense
Uses a wireless mesh network to monitor:
  • Server inlet temperatures
  • Delta T across CRAC units
  • Humidity (from which dew points are calculated)
  • Subfloor pressure differentials
The SynapSense wireless environmental monitoring and Data Center Optimization Platform software provides real-time visibility to assess current data center operating conditions, including generating a temperature gradient to identify operational or energy-efficiency opportunities and quantify improvements.
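The dew-point calculation mentioned in the list above is typically derived from paired temperature and humidity readings. Here is a sketch using the Magnus approximation, which is our choice of formula for illustration, not necessarily the one any particular platform uses:

```python
import math

def dew_point_c(temp_c, rh_pct):
    """Approximate dew point (deg C) from dry-bulb temperature and
    relative humidity using the Magnus formula."""
    a, b = 17.62, 243.12  # Magnus coefficients for water vapor
    gamma = math.log(rh_pct / 100.0) + (a * temp_c) / (b + temp_c)
    return (b * gamma) / (a - gamma)
```

At 100% relative humidity the dew point equals the air temperature; as humidity drops, the dew point falls below it, which is what makes the metric useful for condensation alarms.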

Vigilent
Vigilent energy management systems are built upon a sophisticated wireless mesh network using technology developed by Dust Networks®, the leader in industrial wireless networking. This implementation is designed for the most demanding industrial applications, in harsh environments where packet delivery is critical.
These wireless sensors are installed at CRAC/CRAH supply and return as well as rack inlets to determine the zone of influence and the impact as air handlers are cycled down or turned off to optimize the cooling to the demand of the IT footprint.

Wireless Sensors
The SensiNet Rack Sentry is a wireless temperature monitoring device and a component of the SensiNet wireless sensor network. It reports highly accurate, real-time ambient temperature measurements, without wires, and is FCC- and CE-approved for license-free operation worldwide. The Rack Sentry utilizes a solid-state sensor in a unique configuration for ultimate installation flexibility. Individual sensors are “daisy chained” using standard CAT5 patch cables. Up to three sensors are supported as standard, and sensors can be added and/or reconfigured in the field. The system simply recognizes the attached sensors and reports temperature with virtually no user configuration.

The Rack Sentry utilizes highly accurate MEMS solid-state sensors, and a replaceable “C”-size battery provides years of reliable operation. The SensiNet Services data acquisition gateway is a powerful appliance providing network management, user interface, data logging, trending, alarming, and communications without any complicated software to install. A standard browser and network connection are all that’s required to access and configure the system. The GWAY-1022 also operates as a stand-alone data logger with real-time views, trending, and e-mail alerts.

With the various choices and solutions described above, it may help to discuss your requirements with a Data Center Solutions professional from PTS.

Sunday, March 17, 2013

Is Single Pane of Glass Overemphasized by the Data Center Infrastructure Management Industry?

I believe many seeking the "Holy Grail" of Data Center Management, a Single Pane of Glass to manage and monitor their Data Center and IT infrastructure, are about as successful as the archeologists seeking the divine cup. I've seen many enterprise Data Centers come to the conclusion that they aren't ready for a Single Pane of Glass tool after sending out RFIs seeking one. Is it realistic to think that an enterprise Data Center can get everything it needs to effectively monitor, manage, and optimize on a Single Pane of Glass? Does this single pane then become such a crowded screen that alerts and alarms get lost? Can a single pane be used to monitor, manage, and optimize all of the assets and systems that are critical to the success of our Data Center's performance and availability?

Can a Single Pane of Glass Realistically
Manage your Data Center Infrastructure?

Where I think we first need to focus our attention in the evolution of Data Center monitoring and management is on getting all of the data from systems that discover assets and monitor system conditions and performance into a CMDB, so all of our software tools can utilize this important data. IT, Facilities, and executive management then all use the same data and can work as a team to address issues and optimize the performance of the Data Center and IT infrastructure. Obtaining this data, and verifying that it is correct before it is entered into a CMDB, is a huge challenge, and few organizations have accomplished this feat. Many have failed by attempting to gather too much of this data manually. Organizations can typically expect a 10% error rate in manual data entry due to typing and transcribing errors. Can we afford to be making decisions about the capacity, performance, and availability of our Data Centers with a 10% error rate? Before we can even think about a Single Pane of Glass, we have to implement a CMDB strategy that includes real-time data collection and accuracy validation.
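A sketch of the kind of accuracy validation described above, the automated gate that replaces error-prone manual entry. The rule set, field names, and U-range are our own illustrative assumptions, not a standard CMDB schema:

```python
REQUIRED = {"asset_id", "owner", "department"}

def validate(record):
    """Return a list of problems found in a candidate CMDB record.

    An empty list means the record may be inserted; real pipelines
    would also cross-check against automated discovery data.
    """
    errors = []
    for f in sorted(REQUIRED):
        if not record.get(f):
            errors.append(f"missing {f}")
    # Illustrative range check: rack positions in a tall cabinet.
    if "rack_u" in record and not (1 <= record["rack_u"] <= 52):
        errors.append("rack_u out of range")
    return errors
```

Rejecting bad records at the door is what keeps a 10% manual error rate from propagating into every capacity and availability decision downstream.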

I'd be interested in hearing where your organization is in the evolution of its Data Center & IT infrastructure management, and whether you agree or disagree with my focal points for success.

Tuesday, February 19, 2013

PTS Exhibiting at 3rd Annual HITECH Symposium for Healthcare Related IT Solutions

PTS Data Center Solutions is exhibiting at the Third Annual Mid-Atlantic Crossing the Infrastructure & HITECH Meaningful Divide Symposium. 

The event is entitled “Patients, Care Givers, and Technology: Partners in Care” and will take place on March 21st and 22nd at the Radisson Valley Forge in Pennsylvania.

For those of you unfamiliar with HITECH, the HITECH Act was established with the primary goal of improving the population’s health and the quality and cost of healthcare. One particular focus area is the ability to provide patients’ electronic medical records to service providers anywhere in the world via proper, HIPAA-compliant sharing of those records wherever the patient may happen to be.

The symposium includes a series of seminars and presentations related to IT issues and problems experienced by IT professionals in the healthcare sector. In addition, there is an exhibit hall for vendors to present solutions targeting healthcare IT.

PTS has world-class design, engineering, construction, and management staff across both facility and IT disciplines. This integrated data center facility and IT expertise affords PTS a unique vantage point for executing data center, computer room, and network operations center projects for healthcare service providers and hospitals as well as many other market sectors. We can build, redesign, consolidate, or relocate your computer room, as well as provide many IT-related services and solutions:

  • Routing & Switching
  • Information / Network Security
  • Servers & Systems
  • Virtualization Technologies
  • Data Protection & Storage
  • Unified Communications
  • Microsoft Exchange & Active Directory
  • Application Development
  • Software Development
Learn more about the HITECH Symposium or Register for the Event.

Thursday, February 14, 2013

Data Center Energy Use

It can’t be denied that the amount of energy data centers consume is staggering and growing daily, but the data centers themselves should not be held fully responsible; they are adhering to the demands of the consumer. Today’s society calls for 24x7x365 availability, and the future of most companies lies in the hands of uninterrupted availability. For most data center technicians, their jobs depend on 99.99 percent availability, not on saving on the electric bill. This fear of failure, mixed with the high expectations of the end user, is what’s causing the massive surge in data center energy use.

James Glanz recently wrote a piece for the New York Times entitled, “Power, Pollution and the Internet.” Although his article lacks proof, it brings to light an important secret of the data center industry: data centers are gargantuan energy consumers. Personally, I think it was harsh for him to say corporations are wasting a good two-thirds of the energy they consume, because data centers for companies such as Facebook and YouTube need to be run around the clock.

[Photo: Steve Dykes for The New York Times. A row of backup generators, inside white housings, lines the back exterior of the Facebook data center in Prineville, Ore., ensuring service even in the event of a power failure.]
People don’t realize the vast amount of data it takes to allow them to watch a video on the internet through a website that is quite possibly hosting tens of millions of other users. Or how about that video game you’re playing on Facebook? And while we’re at it, how about your entire Facebook profile? All that data is stored for you in one of Facebook’s many data centers. They need to keep it accessible so you can play at any time, anywhere.

So, who’s at fault? Can the answer be no one? We either need to accept the fact that data centers need the energy to meet the demands of the consumer or we, as consumers, must be patient and lower our expectations, but let’s face it, in the words of the mighty Queen, “I want it all, I want it all, I want it all, and I want it NOW!”

In the end, aside from risking potential downtime by reducing data center redundancies or powering down servers when not in use, data center operators can look to energy-efficiency improvements that avoid any increased risk of downtime. PTS Data Center Solutions performs Data Center Energy Efficiency Assessments on behalf of utilities and data center operators. Still, a reduction in the number of data centers and their sizable energy consumption is not going to happen in the near future.

Friday, February 01, 2013

PTS Plays Role in Conservation by Building New Data Center for the World Wildlife Fund

PTS Data Center Solutions recently performed an assessment of the World Wildlife Fund’s (WWF) data center at its headquarters in Washington, D.C. That’s a pretty big deal considering WWF is the world’s leading conservation organization with total operating revenue of over $230 million. WWF networks through 100 countries with over 5 million members, so its data center is a very important part of overall conservation operations.

PTS was able to detect a critical problem with WWF’s data center environment. The data center was experiencing an increase in heat and the Computer Room Air Conditioning units weren’t getting the job done. WWF was in dire need of new power and cooling solutions. IT infrastructure availability and energy efficiency were also vital concerns as they are with all of PTS’ clients.

At first, PTS considered renovating WWF’s aging infrastructure, but when the tenant on WWF’s first floor moved out, PTS determined that an entirely new data center in that space would best suit WWF. PTS was tasked with design, construction management, equipment procurement, installation oversight, commissioning, and post-construction services. WWF received a dynamic cooling solution that gives the data center the energy efficiency it desired. PTS also installed a 100 kVA UPS, giving WWF critical power protection.

“The use of modular systems is an excellent strategy to address growth without major disruptions”, said Michael Petrino, PTS Vice President. “WWF is now operating a reliable, energy efficient data center. With the new, energy efficient cooling solution in place, the WWF data center is able to conserve significant amounts of energy and allow the WWF to practice internally its mission of conservation of natural resources.”

To read more about PTS’ success with WWF click here or contact us for a copy of the case study and the Press Release.

Friday, January 18, 2013

PTS Data Center Solutions Completes Planar Digital Signage Deployment in NYC

PTS Data Center Solutions recently designed and deployed the Planar Clarity™ Matrix LCD Video Wall digital signage solution for Prudential Douglas Elliman. The display is located on Broadway, between 9th and 10th Streets in New York City.

The Clarity™ Matrix LCD Video Wall System delivers the ultimate display solution for digital signage applications. Optimized for uninterrupted 24/7 operation, Clarity™ Matrix is an ultra-thin bezel LCD media wall system that delivers outstanding visual performance, supports extended operation and requires minimal installation space.

Contact PTS to learn more about digital signage solutions as well as digital display solutions for Network Operations Centers.

Wednesday, January 02, 2013

Event Follow-up: Is Your Disaster Recovery Approach a Disaster?

PTS Data Center Solutions, in conjunction with Quorum, hosted a particularly relevant event on December 4th. With over 20 industry executives and Backup & Disaster Recovery experts meeting at the Chart House in Weehawken, NJ, PTS and Quorum discussed the need for improved backup and disaster recovery solutions aimed at the Small- to Mid-size business sector.

"The event was originally scheduled for November 7th but we all know what had just taken place the week before - Hurricane Sandy", said Larry Davis, VP, IT Solutions Group for PTS. "If we could have only spread the word earlier and gotten the Quorum solution out to clients without a clear Disaster Recovery plan, the solution really works for a reasonable price."

Developed by Quorum engineers several years ago as a simple-to-deploy-and-use alternative to expensive redundant server, storage, and virtualization platform approaches, the Quorum solution has been a hit across market sectors including:
  • Schools
  • Banks
  • Financial Services
  • Law Practices
  • Accounting Firms
  • Manufacturers
  • Municipalities
With premises-based appliances, cloud solutions available for offsite recovery, and archive systems for long term storage requirements, the Quorum onQ solution can be deployed rapidly without any other hardware or software needed.

At the event, Quorum engineers provided a live demonstration of a server failure and the One-Click Recovery™ inherent in the onQ solution's design:
  • Current Forever: Each ultra-efficient update is merged into the onQ device which houses virtual machine recovery nodes, full current images of client servers and virtual servers.
  • Ready-to-Run: The approach doesn't wait until you need to recover to build your virtual recovery nodes, allowing one-click recovery at any time.
  • Point-in-Time Recovery: Even though changes are merged into the ready-to-run recovery node, you can restore files or an entire system to a prior state. This is a perfect fit for businesses and organizations needing the ability to store and recover 7 years of data for regulatory purposes.
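The "Current Forever" and point-in-time ideas above can be sketched as a toy model: each update merges into a ready-to-run image, while prior states are retained for restore. This is purely illustrative of the concept, not Quorum's implementation:

```python
import copy

class RecoveryNode:
    """Toy model: merge deltas into a current image, keep snapshots."""

    def __init__(self, base_image):
        self.image = dict(base_image)
        self.snapshots = [copy.deepcopy(self.image)]

    def apply_update(self, delta):
        # "Current Forever": the delta merges into the live image...
        self.image.update(delta)
        # ...but the post-merge state is also retained as a snapshot.
        self.snapshots.append(copy.deepcopy(self.image))

    def restore(self, index):
        """Point-in-time recovery: roll back to any retained snapshot."""
        self.image = copy.deepcopy(self.snapshots[index])
        return self.image

node = RecoveryNode({"config": "v1"})
node.apply_update({"data": "day-1"})
```

Because the merged image is always ready to run, recovery is a pointer change rather than a rebuild, which is what enables the one-click behavior demonstrated at the event.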
To learn more, visit PTS' website, watch the onQ video on the YouTube Data Center channel, or contact PTS at

Wednesday, December 19, 2012

NJ Tech Council: Adoption of DCIM Tools Rages On

The first annual New Jersey Technology Council (NJTC) Data Center Summit was a real success. With upwards of 150 data center professionals attending, the first panel discussion focused upon Data Center Infrastructure Management (DCIM) Challenges & Opportunities.

The first panel speaker, Peter Sacco, President of PTS Data Center Solutions, provided a solid overview of the DCIM sector, its functional areas, and the challenges faced by both manufacturers and end clients. He then put manufacturers on notice. Mr. Sacco stated there are 100+ companies producing hardware, software, and/or platforms for DCIM. The problem is that each company’s offering typically does one or two of the functional requirements well, others less well, and others not at all. Worse, little effort is made to work with one another, although that is becoming less so as providers realize their own limitations.

As such, what data center managers really seek from DCIM, easy access to meaningful data that seamlessly correlates to actionable plans, has yet to be realized. In support of this supposition, Pete mentioned the Uptime Institute’s 2010 paper Data Center Infrastructure Management: Consolidation, But Not Yet, which notes the market for data center infrastructure management systems will grow from $500 million in 2010 to $7.5 billion by 2020. So far, this hypergrowth hasn't materialized, as the holy grail of DCIM has been stunted by underpowered solutions or solutions that are difficult to deploy.

The remainder of the DCIM panel discussion centered upon manufacturer and user challenges, new developments within the industry, and future directions as panelists compared existing solutions and viability of current deployments.

Beyond the DCIM panel, a second panel discussion focused on Lessons Learned from the aftermath of Hurricane Sandy. Various disaster recovery approaches, processes, and solutions were debated by the panelists. The event also included exhibits with lively discussions around many current hot topics in the data center community.

To learn more about Mr. Sacco's perspectives on DCIM, contact him via email, or download Pete's latest white paper Data Center Infrastructure Management - The Updated Elephant which provides a detailed review of the market for DCIM solutions. Additional DCIM solutions are available on the PTS website. More information on the Data Center Summit is available at Data Center Knowledge.

Friday, November 30, 2012

Asset Performance Management for IT and Data Centers

We are all likely using some tool to track IT assets. To that end, many use their asset management tool to manage the lifecycle of their IT and Data Center assets. However, most tools that provide lifecycle management of IT assets only look at the depreciation value of the asset, or perhaps the cost to maintain it. How can we really understand the lifecycle of an asset without also weighing its performance against the cost to operate it, and/or against the cost to operate a new asset? With the rising costs of operating IT assets, PTS thinks it is time to borrow an idea from our manufacturing brethren, who have developed tools to manage the performance and optimize the production of their plants. After all, isn't a Data Center merely a manufacturing plant for processing and storing data? Much like a power plant wants to optimize electricity produced per unit of fossil fuel burned, we as operators of Data Centers need to optimize the IOPS (input/output operations per second) produced per kW (kilowatt) consumed. The difference is that there are numerous Asset Performance Management (APM) software tools for the manufacturing world to help optimize the plant by managing such issues as:
  • Reducing operational costs
  • Extending asset life
  • Delivering higher performance with reduced resources
  • Compliance with regulations & standards
  • Standardizing asset care processes or practices
  • Dealing with data management & islands of data
  • Safety and environmental performance
  • Time-based PM tasks and the need for CBM
  • Aging workforce and loss of knowledge
All of these issues addressed by today's manufacturing APM software tools need, in PTS' opinion, to be addressed by APM tools for IT and Data Centers. In conjunction with several of our key partners, PTS has been leading the way with tools that provide the analytics to evaluate the performance of our IT and Data Center assets.
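The plant-efficiency analogy, IOPS produced per kW consumed, can be sketched as a simple metric plus an equally simple replace-or-retire rule of thumb. The 1.5x margin is an arbitrary illustrative threshold, not an industry standard:

```python
def iops_per_kw(iops, power_kw):
    """Work delivered per unit of power drawn, the data-center
    analogue of a power plant's output per unit of fuel burned."""
    if power_kw <= 0:
        raise ValueError("power must be positive")
    return iops / power_kw

def replace_recommended(old, new, margin=1.5):
    """Flag an asset for replacement when a candidate delivers at
    least `margin` times the work per kW. old/new are (iops, kw)."""
    return iops_per_kw(*new) >= margin * iops_per_kw(*old)

# Example: an aging array vs. a newer, denser candidate.
aging = (50_000, 2.0)      # 25,000 IOPS per kW
candidate = (120_000, 1.5) # 80,000 IOPS per kW
```

The point of the metric is that depreciation alone would keep the aging array in service, while a performance-per-watt view argues for retiring it.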
Tools must analyze the performance of IT assets, and quantify and recommend whether to retire, replace, consolidate, or maintain them.
Optimization tools should put facilities metrics on the same screen as IT metrics, giving data center operators the ability to view and close the gaps between planned capacity and actual IT and facility energy usage.
Tools need to go beyond a basic measurement like PUE (Power Usage Effectiveness), which can be skewed by underutilized or rogue IT equipment, and look at Server Compute Efficiency: the number of primary processes performed by a server versus the watts consumed by that server.
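A sketch of why PUE alone can mislead: a rogue server adds IT load, which flatters the PUE ratio, while delivering no primary work. The per-watt framing below follows the article's description of Server Compute Efficiency and is our simplified reading of that metric:

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility power over IT power.
    Lower is better; 1.0 would mean every watt reaches IT gear."""
    return total_facility_kw / it_kw

def primary_work_per_watt(primary_process_count, server_watts):
    """Useful (primary) processes per watt drawn. A rogue server
    scores zero here no matter how much power it consumes."""
    return primary_process_count / server_watts
```

An idle 300 W rogue server raises the IT load that sits in PUE's denominator, nudging the ratio toward 1.0, yet its primary work per watt is exactly zero, which is the skew the text warns about.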
Finally, data center personnel cannot optimize the performance of their data center assets without managing the labor, tasks, parts, and contracts needed to keep the entire data-processing plant functioning 7x24.
We've highlighted just a few of the tools PTS has brought together to tackle the problem and build the foundation of Asset Performance Management for IT and Data Centers. It's comical when you think about it: the IT and Data Center industry prides itself on technology and software, yet one could argue there is better software available today to manage and optimize a paper mill than a data center. Is a lack of knowledge about the value and performance of IT and Data Center assets a problem in your organization, and what is your organization doing to address APM for its IT and Data Centers?