Wednesday, June 12, 2013

Tips for use of VMware vCenter Site Recovery Manager

There are various ways to deliver Backup & Disaster Recovery for your enterprise. Backup, a prerequisite for disaster recovery, includes tape, local disk, remote disk, or some other means of storing your data in case of IT equipment failure or loss. For disaster recovery, PTS Data Center Solutions has presented solutions which include all-in-one appliances, co-location disaster recovery service providers, and Storage Area Network (SAN) replication. VMware vCenter Site Recovery Manager (SRM) is an excellent approach to consider, given:
  • Automated migration and site recovery
  • Integration with your virtualized environment if you already leverage VMware solutions
  • Non-disruptive testing on the site recovery environment
  • Simple recovery plan management 
vCenter SRM will require replication of your server and storage environment offsite at a secondary, disaster recovery site. However, with the right expertise and experience, the control and consistent failover it provides result in a manageable disaster recovery plan. VMware provides a series of technical tips for consideration when you are ready to move forward:
  1. Start small with a single application or service before implementing across your entire enterprise
  2. Learn and address application dependencies to confirm applications are available at the recovery site for the services that must run there
  3. Determine the best replication tool (VMware or a 3rd party) for your situation
  4. Pre-load the recovery environment with data, even if it is slightly stale, so it can synchronize quickly
  5. Organize data by logical failover groups
  6. Make sure storage replication adapters are up to date
  7. Orchestrate the sequence in which VMs start at the recovery site to prioritize key groups and their dependencies
  8. Build multiple recovery plans with common protection groups that fail over together
  9. Make sure your VMware software is up-to-date at all times
  10. Perform frequent recovery plan testing, particularly in advance of any storm warnings
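Tip 7 above is essentially a dependency-ordering problem. As a rough illustration (the VM names and dependency map below are hypothetical, not SRM's actual configuration format), a topological sort yields a startup sequence that respects every dependency:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each VM lists the VMs that must already
# be running before it starts (the app tier depends on the database,
# the web tier depends on the app tier).
dependencies = {
    "db-01":  [],          # no prerequisites; starts first
    "app-01": ["db-01"],   # application tier needs the database
    "web-01": ["app-01"],  # web tier needs the application tier
    "web-02": ["app-01"],
}

# static_order() produces a start sequence honoring all dependencies.
start_order = list(TopologicalSorter(dependencies).static_order())
print(start_order)  # db-01 first, web tier last
```

The same idea scales to protection groups: order whole groups by their dependencies, then start the VMs within each group.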
To learn more, contact PTS or download the VMware vCenter Site Recovery Manager Tech Tip (registration required).

Friday, May 03, 2013

E&Y Names PTS President a Finalist for the 2013 Entrepreneur of the Year Award

PTS Data Center Solutions President Pete Sacco
PTS Data Center Solutions is proud to announce its President & CEO, Pete Sacco, has been named a finalist for the Ernst & Young Entrepreneur of the Year® 2013 Award in the New Jersey region.

The awards program recognizes high-growth entrepreneurs who demonstrate excellence and extraordinary success in such areas as innovation, financial performance and personal commitment to their businesses and communities. These finalists were selected by a panel of independent judges. Award winners will be announced at a special gala event on Thursday, June 13, 2013 at the Hyatt New Brunswick.

Pete was surprised and excited to learn about being named a finalist for such a prestigious award. He's had the entrepreneurial bug for many years and has started or been a part of a founding team for five startups in the last 16 years.

Thursday, April 11, 2013

Critical Considerations during a Data Center Migration

If you've got more than a rack or two in your data center or computer room, a data center migration is rife with risk. Who wants to lie awake in the weeks before the migration wondering if they've missed something? Will everything go smoothly? Did I make the right choices for service companies, infrastructure upgrades, network service providers, etc.?

In a nutshell, planning and perspective are critical for data center managers when it's time to complete a migration (or consolidation) of data center assets. Planning and perspective allow you to take a step back and make sure your approach holds water, to check with peers in the industry for accepted best practices, and to keep your job when the migration goes smoothly.

Critical Considerations in Preparation for a Data Center Migration include:
  • Think About the Layout. Flow through a data center is critical to developing efficiencies. Flow includes power from the utility through distribution to feeders to PDUs, as well as battery backup and utility backup (generators), and is driven by a coherent data center design. In addition to power, think about network connectivity from the ingress at the street through to the network core, and consider how data will flow from core to distribution to access layers out to server/storage assets. A simple rule of thumb: firewalls, DMZs, and network termination equipment should all be located close to the network entrance and/or network rack.
  • Plan for Growth. It isn't enough to plan for growth within today's paradigm and technology. Rather, if at all possible, it's critical to consider the next two life cycles in technology. This means researching expected future rack power requirements as well as the data center's key design criteria for today and 2-3 years into the future. Who would have thought 5 kW of redundant power at the rack might not be enough? It isn't if your organization is planning to roll out blade server cabinets. Don't get caught having to migrate yet again.
  • Plan the Cable Plant. Cabling architecture is the backbone of the data center network infrastructure. Careful planning and consideration are important when deciding on a data center cabling architecture. Key concerns are scalability, flexibility, manageability, availability, and total cost. Therefore, it is critical to plan in advance, leaving space for core switches and for future growth of the core and distribution switches and cable plant. Also, particularly if you are using a raised-floor approach, deploy your cabinets, pull fiber to the cabinets, and run branch circuits for power up front. The incremental cost of the fiber and power cables waiting for use is minimal, you already have the labor onsite, and who wants an invasive change or upgrade several years down the road?
  • Confirm the Asset Inventory. A data center migration gives you the opportunity to "clean out your attic". Like moving between homes, you shouldn't migrate or relocate assets that are decommissioned or not in the data center inventory list. Assets should be in your Configuration Management Database (CMDB), including owner, department, business processes, applications, and dependencies. In fact, all data center assets should be tracked and maintained both before the migration and after it takes place.
  • Develop a Complete Relocation Plan. The final step in the data center migration is the relocation itself. Data Center relocations are expensive and require specific expertise and experience. Elements of a solid relocation plan include: Pre-planning and project management, pre-move site preparation, move plan creation, and post-move reviews.
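The asset-inventory rule above (relocate only what is tracked in the CMDB and still active) can be sketched as a simple filter. The asset names and fields below are hypothetical:

```python
# Hypothetical CMDB records: name -> status. In practice these would
# come from your Configuration Management Database, with owner,
# department, applications, and dependencies attached.
cmdb = {
    "sql-prod-01": {"status": "active"},
    "file-srv-02": {"status": "decommissioned"},
    "esx-host-03": {"status": "active"},
}

# What's physically on the floor, per a walk-through audit.
racked_assets = ["sql-prod-01", "file-srv-02", "esx-host-03", "mystery-box-7"]

def migration_candidates(assets, cmdb):
    """Relocate only assets tracked in the CMDB and still active."""
    return [a for a in assets
            if a in cmdb and cmdb[a]["status"] == "active"]

def untracked(assets, cmdb):
    """Anything on the floor but not in the CMDB needs investigation."""
    return [a for a in assets if a not in cmdb]

print(migration_candidates(racked_assets, cmdb))  # active, tracked assets
print(untracked(racked_assets, cmdb))             # ["mystery-box-7"]
```

The untracked list is often where the surprises live: gear on the floor that nobody owns on paper should be resolved before moving day, not after.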
Ultimately, a Data Center Migration requires careful planning, continuous communications, solid contributions from internal and external team members, and risk mitigation plans if/when the unexpected happens. Data Center Consulting Services are available from the consultants at PTS Data Center Solutions.

Wednesday, April 03, 2013

Have You Made the Move to Wireless Monitoring?

PTS Data Center Solutions consultants are finding most data center facilities operators don’t want to burden their expensive network infrastructure with environmental & power monitoring solutions. We all know we can’t manage what we don’t measure, but often reluctance on the part of facilities and IT to work together on the problem prevents both groups from effectively monitoring environmental conditions and optimizing their data centers.

Wireless sensor solutions not only eliminate the resistance we might meet in trying to get network infrastructure allocated for environmental monitoring; they also offer rapid deployment, easy scaling, and the flexibility to move sensors around for testing or as new equipment is deployed. The wireless environmental market is growing quickly, and here are some of the solutions PTS has evaluated and implemented. Some provide only monitoring, while others provide analytics and/or control software. We are interested in hearing from others on their experiences and why wireless monitoring has or has not worked in your data center.

Aurora
Innovative patent-pending technology couples a series of 8 temperature sensors with an array of high intensity LEDs. This design provides the appearance of a “live” CFD by visually displaying a range of cool (blue) to hot (red) and a blend of 129 colors in between. The Aurora is the perfect self-policing, real-time troubleshooting tool to clearly identify potential cooling or heat-related issues in your racks. Aurora is extremely accurate because it measures air temperature, not surface temperature. The 3 User-Selectable Sensitivity Settings allow you to fine tune the monitored temperature range of the full 129-color spectrum. Because temperature is monitored over the entire height of the cabinet, Aurora is perfect for Aisle Containment and areas susceptible to temperature stratification. Optional Wireless Communication and Management Interface enables the temperature readings for all 8 sensors per strip to be captured for trending and alerting purposes.

Packet Power
Uses a wireless mesh network to monitor the inline power meters and the temperature, humidity, and air pressure sensors that managing complex facilities requires. The data collected from these sensors can be managed in a cloud portal called EMX, you can run Power Manager from Packet Power, or you can use a gateway with SNMP and Modbus TCP/IP connectivity to link the wireless monitoring devices to your existing data center monitoring software.

RF Code
Manufactures RFID environmental tags for temperature, humidity, and pressure, plus PDU tags that work with ServerTech, Geist, and Raritan metered and switched PDUs. They have RFID asset tags as well if you want to track where assets are, even down to which U they occupy in your racks. Each 433.92 MHz RFID reader can support up to 1,400 RFID tags, and the reader can communicate over multiple wired or wireless networks to report sensor information into various software packages; alternatively, you can use Sensor Manager to collect information from all types of RF Code wire-free sensor tags. Sensor Manager organizes all sensor information by sensor type as well as sensor location. All information collected by Sensor Manager can be viewed interactively in real time via an easy-to-use web-browser-based console, with customized table views as well as graphical map views. Historical data can be easily organized into reports and graphs using the standard reporting and graphing capability, or via RF Code’s Advanced Reporting Module, which utilizes the powerful open-source BIRT reporting engine.

SynapSense
Uses a Wireless mesh network to monitor:
  • Server inlet temperatures
  • Delta T across CRAC units
  • Humidity (used to calculate dew points)
  • Subfloor pressure differentials
SynapSense wireless environmental monitoring and Data Center Optimization Platform software provides real-time visibility to assess current data center operating conditions, including generating a temperature gradient to identify operational or energy efficiency opportunities, and quantify improvements.
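For the curious, the dew-point value a monitoring platform derives from temperature and relative-humidity readings can be approximated with the well-known Magnus formula. This is a generic sketch, not SynapSense's actual calculation:

```python
import math

def dew_point_c(temp_c, rh_percent):
    """Approximate dew point (deg C) from dry-bulb temperature and
    relative humidity using the Magnus formula."""
    a, b = 17.62, 243.12  # commonly used Magnus coefficients
    gamma = math.log(rh_percent / 100.0) + (a * temp_c) / (b + temp_c)
    return (b * gamma) / (a - gamma)

# A server inlet at 24 deg C and 45% RH sits comfortably above the
# dew point, so condensation is not a concern at these readings.
print(round(dew_point_c(24.0, 45.0), 1))
```

Tracking the gap between measured inlet temperature and computed dew point is one way alerting thresholds for condensation risk get set.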

Vigilent
Vigilent energy management systems are built upon a sophisticated, wireless mesh network using technology developed by Dust Networks®, the leader in industrial wireless networking. This implementation is designed for the most demanding industrial applications, in harsh environments where packet delivery is critical.
These wireless sensors are installed at CRAC/CRAH supply and return as well as rack inlets to determine the zone of influence and the impact as air handlers are cycled down or turned off to optimize the cooling to the demand of the IT footprint.

Wireless Sensors
The SensiNet Rack Sentry is a wireless temperature monitoring device and a component of the SensiNet wireless sensor network. It reports highly accurate, real-time ambient temperature measurements, without wires, and is FCC- and CE-approved for license-free operation worldwide. The Rack Sentry utilizes a solid-state sensor in a unique configuration for ultimate installation flexibility. Individual sensors are “daisy chained” using standard CAT5 patch cables. Up to three sensors are supported as standard, and these sensors can be added and/or reconfigured in the field. The system simply recognizes the attached sensors and reports temperature with virtually no user configuration.

The Rack Sentry utilizes highly accurate MEMS solid-state sensors, and a replaceable “C”-size battery provides years of reliable operation. The SensiNet Services data acquisition Gateway is a powerful appliance providing network management, user interface, data logging, trending, alarming, and communications without any complicated software to install. A standard browser and network connection are all that’s required to access and configure the system. The GWAY-1022 also operates as a stand-alone data logger with real-time views, trending, and e-mail alerts.

With the various choices and solutions described above, it may help to discuss your requirements with a Data Center Solutions professional from PTS.

Sunday, March 17, 2013

Is Single Pane of Glass Overemphasized by the Data Center Infrastructure Management Industry?

I believe many seeking the "Holy Grail" of Data Center Management, a Single Pane of Glass to manage and monitor their Data Center and IT infrastructure, are about as successful as the archeologists seeking the divine cup. I've seen many enterprise Data Centers come to the conclusion that they aren't ready for a Single Pane of Glass tool after sending out RFIs seeking one. Is it realistic to think that an enterprise Data Center can get everything it needs to effectively monitor, manage, and optimize on a Single Pane of Glass? Does this single pane then become such a crowded screen that alerts and alarms become lost? Can a single pane be used to monitor, manage, and optimize all of the assets and systems that are critical to the success of our Data Center's performance and availability?


Can a Single Pane of Glass Realistically
Manage your Data Center Infrastructure?

Where I think we first need to focus our attention in the evolution of Data Center monitoring and management is on getting the data from all the systems that discover assets and monitor conditions and performance into a CMDB, so all of our software tools can utilize this important data. IT, Facilities, and executive management are then all using the same data and can work as a team to address issues and optimize the performance of the Data Center and IT infrastructure. Obtaining this data and verifying that it is correct before it is entered into a CMDB is a huge challenge, and few organizations have accomplished this feat. Many have failed by attempting to gather too much of this data manually. Organizations can typically expect a 10% error rate in manual data entry due to typing and transcribing errors. Can we afford to be making decisions about the capacity, performance, and availability of our Data Centers with a 10% error rate? Before we can even think about a Single Pane of Glass, we have to implement a CMDB strategy that includes real-time data collection and accuracy validation.
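As a simple illustration of what accuracy validation can mean in practice, a collection pipeline can reject records that fail basic checks before they ever reach the CMDB. The field names and asset-tag convention below are hypothetical:

```python
# Hypothetical required fields for any CMDB record.
REQUIRED_FIELDS = {"asset_tag", "owner", "department", "location"}

def validate_record(record):
    """Return a list of problems; an empty list means the record
    is safe to load into the CMDB."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    tag = record.get("asset_tag", "")
    # Hypothetical tag convention: three letters, a dash, four digits.
    if tag and not (len(tag) == 8 and tag[:3].isalpha()
                    and tag[3] == "-" and tag[4:].isdigit()):
        problems.append(f"malformed asset_tag: {tag!r}")
    return problems

good = {"asset_tag": "SRV-0042", "owner": "jsmith",
        "department": "IT", "location": "Rack A3, U12"}
bad = {"asset_tag": "srv42", "owner": "jsmith"}

print(validate_record(good))  # empty list: clean record
print(validate_record(bad))   # missing fields and a malformed tag
```

Rejected records get queued for a human to fix, rather than silently polluting the database the way unchecked manual entry does.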

I'd be interested in hearing where your organization is in the evolution of its Data Center & IT infrastructure management, and whether you agree or disagree with my focal points for success.

Tuesday, February 19, 2013

PTS Exhibiting at 3rd Annual HITECH Symposium for Healthcare Related IT Solutions

PTS Data Center Solutions is exhibiting at the Third Annual Mid-Atlantic Crossing the Infrastructure & HITECH Meaningful Divide Symposium. 

The event, entitled “Patients, Care Givers, and Technology: Partners in Care,” will take place on March 21st and 22nd at the Radisson Valley Forge in Pennsylvania.

For those of you unfamiliar with HITECH, the HITECH Act was established with the primary goal of improving the population’s health and the quality and cost of healthcare. One particular focus area is the ability to deliver patients’ electronic medical records to service providers, via proper, HIPAA-compliant sharing, anywhere the patient may happen to be.

The symposium includes a series of seminars and presentations related to IT issues and problems experienced by IT professionals in the healthcare sector. In addition, there is an exhibit hall for vendors to present solutions targeting healthcare IT.

PTS has world-class design, engineering, construction, and management staff across both facility and IT disciplines. This integrated data center facility and IT expertise affords PTS a unique vantage point for executing data center, computer room, and network operations center projects for healthcare service providers and hospitals, as well as many other market sectors. We can build, redesign, consolidate, or relocate your computer room, as well as provide many IT-related services and solutions:

  • Routing & Switching
  • Information / Network Security
  • Servers & Systems
  • Virtualization Technologies
  • Data Protection & Storage
  • Unified Communications
  • Microsoft Exchange & Active Directory
  • Application Development
  • Software Development
Learn more about the HITECH Symposium or Register for the Event.

Thursday, February 14, 2013

Data Center Energy Use

It can’t be denied that the amount of energy data centers consume is sickening and growing daily, but the data centers themselves should not be held fully responsible; they are adhering to the demands of the consumer. Today’s society calls for 24x7x365 availability, and the future of most companies lies in the hands of uninterrupted availability. For most data center technicians, their jobs depend on 99.99 percent availability, not on saving on the electric bill. This fear of failure, mixed with the high expectations of the end user, is what’s causing this massive surge in data center energy use.

James Glanz recently wrote a piece for the New York Times entitled, “Power, Pollution and the Internet.” Although his article lacks proof, it brings to light an important secret of the data center industry: data centers are gargantuan energy consumers. Personally, I think it was harsh for him to say corporations are wasting a good two-thirds of the energy they consume, because data centers for companies such as Facebook and YouTube need to be run around the clock.


Steve Dykes for The New York Times
INSURANCE A row of backup generators, inside white housings, lines the back exterior of the Facebook data center in Prineville, Ore. They are to ensure service even in the event of a power failure.
People don’t realize the vast amount of data it takes to allow them to watch a video on the internet through a website that is quite possibly hosting tens of millions of other users. Or how about that video game you’re playing on Facebook? And while we’re at it, how about your entire Facebook profile? All that data is stored for you in one of Facebook’s many data centers. They need to keep it accessible so you can play at any time, anywhere.

So, who’s at fault? Can the answer be no one? We either need to accept the fact that data centers need the energy to meet the demands of the consumer or we, as consumers, must be patient and lower our expectations, but let’s face it, in the words of the mighty Queen, “I want it all, I want it all, I want it all, and I want it NOW!”

In the end, aside from risking potential downtime by reducing data center redundancies or powering down servers when not in use, data center operators can pursue energy efficiency improvements that avoid any increased risk of downtime. PTS Data Center Solutions performs Data Center Energy Efficiency Assessments on behalf of utilities and data center operators. However, reducing the number of data centers and their sizable energy consumption is not going to happen in the near future.

Friday, February 01, 2013

PTS Plays Role in Conservation by Building New Data Center for the World Wildlife Fund


PTS Data Center Solutions recently performed an assessment of the World Wildlife Fund’s (WWF) data center at its headquarters in Washington, D.C. That’s a pretty big deal considering WWF is the world’s leading conservation organization with total operating revenue of over $230 million. WWF networks through 100 countries with over 5 million members, so its data center is a very important part of overall conservation operations.

PTS was able to detect a critical problem with WWF’s data center environment. The data center was experiencing an increase in heat and the Computer Room Air Conditioning units weren’t getting the job done. WWF was in dire need of new power and cooling solutions. IT infrastructure availability and energy efficiency were also vital concerns as they are with all of PTS’ clients.

At first, PTS considered renovating WWF’s aging infrastructure, but when the tenant on WWF’s first floor moved out, PTS determined an entirely new data center in that space would best suit WWF. PTS was tasked with design, construction management, equipment procurement, installation oversight, commissioning, and post-construction services. WWF received a dynamic cooling solution that gives the data center the energy efficiency it desired. PTS also installed a 100 kVA UPS, which gave WWF critical power protection.

“The use of modular systems is an excellent strategy to address growth without major disruptions”, said Michael Petrino, PTS Vice President. “WWF is now operating a reliable, energy efficient data center. With the new, energy efficient cooling solution in place, the WWF data center is able to conserve significant amounts of energy and allow the WWF to practice internally its mission of conservation of natural resources.”

To read more about PTS’ success with WWF click here or contact us for a copy of the case study and the Press Release.

Friday, January 18, 2013

PTS Data Center Solutions Completes Planar Digital Signage Deployment in NYC

PTS Data Center Solutions recently designed and deployed the Planar Clarity™ Matrix LCD Video Wall digital signage solution for Prudential Douglas Elliman. The display is located on Broadway, between 9th and 10th Streets in New York City.

The Clarity™ Matrix LCD Video Wall System delivers the ultimate display solution for digital signage applications. Optimized for uninterrupted 24/7 operation, Clarity™ Matrix is an ultra-thin bezel LCD media wall system that delivers outstanding visual performance, supports extended operation and requires minimal installation space.


Contact PTS to learn more about digital signage solutions as well as digital display solutions for Network Operations Centers.

Wednesday, January 02, 2013

Event Follow-up: Is Your Disaster Recovery Approach a Disaster?

PTS Data Center Solutions, in conjunction with Quorum, hosted a particularly relevant event on December 4th. With over 20 industry executives and Backup & Disaster Recovery experts meeting at the Chart House in Weehawken, NJ, PTS and Quorum discussed the need for improved backup and disaster recovery solutions aimed at the Small- to Mid-size business sector.

"The event was originally scheduled for November 7th but we all know what had just taken place the week before - Hurricane Sandy", said Larry Davis, VP, IT Solutions Group for PTS. "If we could have only spread the word earlier and gotten the Quorum solution out to clients without a clear Disaster Recovery plan, the solution really works for a reasonable price."

Developed by Quorum engineers several years ago as a simple-to-deploy alternative to expensive redundant server, storage, and virtualization platform approaches, the Quorum solution has been a hit across market sectors including:
  • Schools
  • Banks
  • Financial Services
  • Law Practices
  • Accounting Firms
  • Manufacturers
  • Municipalities
With premises-based appliances, cloud solutions available for offsite recovery, and archive systems for long term storage requirements, the Quorum onQ solution can be deployed rapidly without any other hardware or software needed.


At the event, Quorum engineers provided a live demonstration of a server failure and the One-Click Recovery™ inherent in the onQ solution's design:
  • Current Forever: Each ultra-efficient update is merged into the onQ device, which houses virtual machine recovery nodes containing full, current images of client servers and virtual servers.
  • Ready-to-Run: The approach doesn't wait until you need to recover to build your virtual recovery nodes, allowing one-click recovery at any time.
  • Point-in-Time Recovery: Even though changes are merged into the ready-to-run recovery node, you can restore files or an entire system to a prior state. This is a perfect fit for businesses and organizations needing the ability to store and recover 7 years of data for regulatory purposes.
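To illustrate the concept behind Ready-to-Run and Point-in-Time Recovery (this is a toy model, not Quorum's implementation), consider a log that merges each incremental update into a current image while keeping every prior snapshot restorable:

```python
import copy

class SnapshotLog:
    """Keep a ready-to-run current state plus a history of
    point-in-time snapshots, so any prior state can be restored."""

    def __init__(self, initial_state):
        self.current = dict(initial_state)
        self.history = [copy.deepcopy(self.current)]  # point 0

    def apply_update(self, changes):
        """Merge an incremental update, then snapshot the result."""
        self.current.update(changes)
        self.history.append(copy.deepcopy(self.current))

    def restore(self, point):
        """Roll the current state back to an earlier snapshot."""
        self.current = copy.deepcopy(self.history[point])
        return self.current

# Two days of incremental updates, then a rollback to day 1.
log = SnapshotLog({"config.txt": "v1"})
log.apply_update({"config.txt": "v2", "data.db": "day1"})
log.apply_update({"data.db": "day2"})

print(log.history[-1])  # latest merged state, ready to run
print(log.restore(1))   # state as of the first update
```

The key point the demo made is that the current image is always complete and bootable; rollback just swaps in an older complete image rather than replaying a chain of backups.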
To learn more, visit PTS' website, watch the onQ video on the YouTube Data Center channel, or contact PTS at sales@ptsdcs.com.