Friday, May 03, 2013
The awards program recognizes high-growth entrepreneurs who demonstrate excellence and extraordinary success in such areas as innovation, financial performance and personal commitment to their businesses and communities. These finalists were selected by a panel of independent judges. Award winners will be announced at a special gala event on Thursday, June 13, 2013 at the Hyatt New Brunswick.
Pete was surprised and excited to learn about being named a finalist for such a prestigious award. He's had the entrepreneurial bug for many years and has started or been a part of a founding team for five startups in the last 16 years.
Thursday, April 11, 2013
In a nutshell, planning and perspective are critical for data center managers when it's time to complete a migration (or consolidation) of data center assets. Planning and perspective allow you to take a step back and make sure your approach holds water, to check with peers in the industry for accepted best practices, and to keep your job when the migration goes smoothly.
Critical Considerations in Preparation for a Data Center Migration include:
- Think About the Layout. Flow through a data center is critical to developing efficiencies. Flow includes power from the utility through distribution to feeders to PDUs, as well as battery backup and utility backup (generators), and is driven by a coherent data center design. In addition to power, think about network connectivity from the ingress at the street through to the network core. Also consider how data will flow from core to distribution to access layers out to server/storage assets. A simple rule of thumb: firewalls, DMZs, and network termination equipment should all be located close to the network entrance and/or network rack.
- Plan for Growth. It isn't enough to plan for growth within today's paradigm and technology. Rather, if at all possible, it's critical to consider the next two technology life cycles. This means researching expected future rack power requirements as well as the data center's key design criteria for today and 2-3 years into the future. Who would have thought 5 kW of redundant power at the rack might not be enough once your organization plans to roll out blade server cabinets? Don't get caught having to migrate yet again.
- Plan the Cable Plant. Cabling architecture is the backbone of the data center network infrastructure. Careful planning and consideration are important when deciding on a data center cabling architecture. Key concerns are scalability, flexibility, manageability, availability, and total cost. Therefore, it is critical to plan in advance, leaving space for core switches and for future growth of the core and distribution switches and cable plant. Also, particularly if you are using a raised floor approach, deploy your cabinets, pull fiber to the cabinets, and run branch circuits for power up front. The incremental cost of fiber and power cables waiting for use is minimal, you already have the labor onsite, and who wants an invasive change or upgrade several years down the road?
- Confirm the Asset Inventory. A data center migration gives you the opportunity to "clean out your attic". Like moving between homes, you shouldn't migrate or relocate assets that are decommissioned or missing from the data center inventory list. Assets should be in your Configuration Management Database (CMDB), including owner, department, business processes, applications, and dependencies. In fact, all data center assets should be tracked and maintained both before the migration and after it takes place.
- Develop a Complete Relocation Plan. The final step in the data center migration is the relocation itself. Data Center relocations are expensive and require specific expertise and experience. Elements of a solid relocation plan include: Pre-planning and project management, pre-move site preparation, move plan creation, and post-move reviews.
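The asset-inventory step above can be automated as a simple pre-move gate. The sketch below is illustrative only: the record fields (`asset_id`, `status`, `owner`) and the idea of a dict-backed CMDB are assumptions for the example, not any specific CMDB product's schema.

```python
# Sketch: flag assets scheduled for relocation that are decommissioned or
# missing from the CMDB, so untracked gear never makes it onto the truck.

def plan_relocation(move_list, cmdb):
    """Split a proposed move list into approved and flagged assets."""
    approved, flagged = [], []
    for asset_id in move_list:
        record = cmdb.get(asset_id)
        if record is None:
            flagged.append((asset_id, "not in CMDB"))
        elif record.get("status") == "decommissioned":
            flagged.append((asset_id, "decommissioned"))
        else:
            approved.append(asset_id)
    return approved, flagged

# Hypothetical inventory and move list
cmdb = {
    "srv-001": {"owner": "finance", "status": "active"},
    "srv-002": {"owner": "hr", "status": "decommissioned"},
}
approved, flagged = plan_relocation(["srv-001", "srv-002", "srv-003"], cmdb)
```

Anything in `flagged` gets resolved (decommission confirmed, or the CMDB corrected) before the relocation plan is finalized.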
Wednesday, April 03, 2013
Wireless sensor solutions not only eliminate the resistance we might encounter in trying to get network infrastructure allocated for environmental monitoring, but also provide the ability to deploy quickly, scale the solution, and move sensors around for testing or as new equipment is deployed. The wireless environmental market is growing quickly, and here are some of the solutions PTS has evaluated and implemented. Some solutions provide only monitoring, while others provide analytics and/or control software. We are interested in hearing from others about their experiences and why wireless monitoring has or has not worked in your data center.
Innovative patent-pending technology couples a series of 8 temperature sensors with an array of high intensity LEDs. This design provides the appearance of a “live” CFD by visually displaying a range of cool (blue) to hot (red) and a blend of 129 colors in between. The Aurora is the perfect self-policing, real-time troubleshooting tool to clearly identify potential cooling or heat-related issues in your racks. Aurora is extremely accurate because it measures air temperature, not surface temperature. The 3 User-Selectable Sensitivity Settings allow you to fine tune the monitored temperature range of the full 129-color spectrum. Because temperature is monitored over the entire height of the cabinet, Aurora is perfect for Aisle Containment and areas susceptible to temperature stratification. Optional Wireless Communication and Management Interface enables the temperature readings for all 8 sensors per strip to be captured for trending and alerting purposes.
Uses a wireless mesh network to monitor the inline power meters, temperature, humidity, and air pressure that the management of complex facilities requires. The data collected from these sensors can be managed in a Cloud portal called EMX, or you can run Power Manager from Packet Power, or you can simply use a gateway providing SNMP and Modbus TCP/IP connectivity to link the wireless monitoring devices to your existing data center monitoring software.
Manufactures RFID environmental tags, including temperature, humidity, pressure, and PDU tags that work with ServerTech, Geist, and Raritan metered and switched PDUs. They offer RFID asset tags as well if you want to track where assets are, down to the rack U position in which they are installed. Each 433.92 MHz RFID reader can support up to 1,400 RFID tags, and the reader can communicate over multiple wired or wireless networks to report the sensor information into various software packages. Alternatively, you can use Sensor Manager to collect information from all types of RF Code wire-free sensor tags. Sensor Manager organizes all sensor information according to sensor type as well as sensor location. All information collected by Sensor Manager can be viewed interactively in real time via an easy-to-use, web browser-based console, with customized table views as well as graphical map views. All historical data can be easily organized into reports and graphs using the standard reporting and graphing capability, as well as RF Code's Advanced Reporting Module, which utilizes the powerful open source BIRT reporting engine.
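The 1,400-tags-per-reader figure above drives reader-count planning. A minimal sizing sketch, where the per-rack tag counts are purely illustrative assumptions:

```python
import math

# Sketch: estimate how many RFID readers a deployment needs, given the
# stated capacity of up to 1,400 tags per reader. Per-rack tag counts
# below are hypothetical examples, not vendor figures.

TAGS_PER_READER = 1400

def readers_needed(racks, asset_tags_per_rack, env_tags_per_rack):
    total_tags = racks * (asset_tags_per_rack + env_tags_per_rack)
    return math.ceil(total_tags / TAGS_PER_READER)

# e.g. 100 racks, each with 40 asset tags and 3 environmental tags
n = readers_needed(100, 40, 3)  # 4,300 tags -> 4 readers
```

In practice you would also leave headroom per reader for growth rather than sizing to the exact limit.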
Uses a Wireless mesh network to monitor:
- Server inlet temperatures
- Delta T across CRAC units
- Humidity, with calculated dew points
- Subfloor pressure differentials
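The dew-point item in the list above is derived rather than measured: sensors report temperature and relative humidity, and dew point is computed from them. A minimal sketch using the widely cited Magnus approximation (coefficients below are the common Magnus constants, valid roughly 0-60 °C):

```python
import math

# Sketch: derive dew point (°C) from measured air temperature and
# relative humidity, as a monitoring system would for condensation alerts.

def dew_point_c(temp_c, rh_percent):
    b, c = 17.62, 243.12  # Magnus coefficients
    gamma = math.log(rh_percent / 100.0) + (b * temp_c) / (c + temp_c)
    return (c * gamma) / (b - gamma)

# e.g. a 24 °C server inlet at 45% relative humidity
dp = dew_point_c(24.0, 45.0)  # roughly 11 °C
```

A monitoring system would alarm when any cold surface (e.g. a chilled-water pipe) approaches the computed dew point.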
Vigilent energy management systems are built upon a sophisticated, wireless mesh network using technology developed by Dust Networks®, the leader in industrial wireless networking. This implementation is designed for the most demanding industrial applications, in harsh environments where packet delivery is critical.
These wireless sensors are installed at CRAC/CRAH supply and return as well as rack inlets to determine the zone of influence and the impact as air handlers are cycled down or turned off to optimize the cooling to the demand of the IT footprint.
The SensiNet Rack Sentry is a wireless temperature monitoring device and a component of the SensiNet wireless sensor network. It reports highly accurate, real-time ambient temperature measurements, without wires, and is FCC- and CE-approved for license-free operation worldwide. The Rack Sentry utilizes a solid state sensor in a unique configuration for ultimate installation flexibility. Individual sensors are "daisy chained" using standard CAT5 patch cables. Up to three sensors are supported as standard, and these sensors can be added and/or reconfigured in the field. The system simply recognizes the attached sensors and reports temperature with virtually no user configuration.
The Rack Sentry utilizes highly accurate MEMS solid state sensors, and a replaceable "C" size battery provides years of reliable operation. The SensiNet Services data acquisition Gateway is a powerful appliance providing network management, user interface, data logging, trending, alarming, and communications without any complicated software to install. A standard browser and network connection are all that's required to access and configure the system. The GWAY-1022 also operates as a stand-alone data logger with real-time views, trending, and e-mail alerts.
With the various choices and solutions described above, it may help to discuss your requirements with a Data Center Solutions professional from PTS.
Sunday, March 17, 2013
I believe many of those seeking the "Holy Grail" of Data Center Management, a Single Pane of Glass to manage and monitor their Data Center and IT infrastructure, are about as successful as the archeologists seeking the divine cup. I've seen many enterprise Data Centers conclude that they aren't ready for a Single Pane of Glass tool after sending out RFIs seeking one. Is it realistic to think that an enterprise Data Center can get everything it needs to effectively monitor, manage, and optimize on a Single Pane of Glass? Does this single pane then become such a crowded screen that the alerts and alarms become lost? Can a single pane be used to monitor, manage, and optimize all of the assets and systems that are critical to the success of our Data Center's performance and availability?
Where I think we first need to focus our attention in the evolution of Data Center monitoring and management is getting all the data from systems that discover assets and monitor system conditions and performance into a CMDB, so all of our software tools can utilize this important data. IT, Facilities, and executive management are then all working from the same data as a team to address issues and optimize the performance of the Data Center and IT infrastructure. Obtaining this data and verifying that it is correct before it is entered into a CMDB is a huge challenge, and few organizations have accomplished this feat. Many have failed in attempts to gather too much of this data manually. Organizations can typically expect a 10% error rate in manual data entry due to typing and transcribing errors. Can we afford to be making decisions about the capacity, performance, and availability of our Data Centers with a 10% error rate? Before we can even think about a Single Pane of Glass, we have to implement a CMDB strategy that includes real-time data collection and accuracy validation.
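The accuracy-validation step argued for above can be as simple as cross-checking two independent sources before a record is accepted into the CMDB. A minimal sketch; the field names and the dict-based records are illustrative assumptions, not a real CMDB schema:

```python
# Sketch: accept a discovered record into the CMDB only when an
# independent source (e.g. a manual audit or a second discovery tool)
# reports identical values; everything else is held for review.

def validate_for_cmdb(discovered, audited):
    accepted, disputed = {}, []
    for asset_id, record in discovered.items():
        if audited.get(asset_id) == record:
            accepted[asset_id] = record
        else:
            disputed.append(asset_id)
    return accepted, disputed

# Hypothetical data: one serial number was mistyped during the audit
discovered = {"srv-001": {"serial": "ABC123"}, "srv-002": {"serial": "XYZ999"}}
audited = {"srv-001": {"serial": "ABC123"}, "srv-002": {"serial": "XYZ988"}}
accepted, disputed = validate_for_cmdb(discovered, audited)
```

With a 10% manual error rate, a gate like this catches the bad records before they pollute capacity and availability decisions, at the cost of a review queue.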
I'd be interested in hearing where your organization is in the evolution of its Data Center & IT infrastructure management and whether you agree or disagree with my focal points for success.
Tuesday, February 19, 2013
PTS Data Center Solutions is exhibiting at the Third Annual Mid-Atlantic Crossing the Infrastructure & HITECH Meaningful Divide Symposium.
The event is entitled "Patients, Care Givers, and Technology: Partners in Care" and will take place on March 21st and 22nd at the Radisson Valley Forge in Pennsylvania.
For those of you unfamiliar with HITECH, the HITECH Act was established with the primary goal of improving the population's health and the quality and cost of healthcare. One particular focus area is the ability to provide patients' electronic medical records to service providers anywhere in the world via proper, HIPAA-compliant sharing of these records wherever the patient may happen to be.
The symposium includes a series of seminars and presentations related to IT issues and problems experienced by IT professionals in the healthcare sector. In addition, there is an exhibit hall for vendors to present solutions targeting healthcare IT.
PTS has world class design, engineering, construction, and management staff across both facility and IT disciplines. This integrated data center facility and IT expertise affords PTS a unique vantage point for executing data center, computer room, and network operations center projects for healthcare service providers and hospitals, as well as many other market sectors. We can build, redesign, consolidate, or relocate your computer room, as well as provide many IT-related services and solutions:
Thursday, February 14, 2013
James Glanz recently wrote a piece for the New York Times entitled, “Power, Pollution and the Internet.” Although his article lacks proof, it brings to light an important secret of the data center industry: data centers are gargantuan energy consumers. Personally, I think it was harsh for him to say corporations are wasting a good two-thirds of the energy they consume, because data centers for companies such as Facebook and YouTube need to be run around the clock.
So, who’s at fault? Can the answer be no one? We either need to accept the fact that data centers need the energy to meet the demands of the consumer or we, as consumers, must be patient and lower our expectations, but let’s face it, in the words of the mighty Queen, “I want it all, I want it all, I want it all, and I want it NOW!”
In the end, aside from risking potential downtime by reducing data center redundancies or powering down servers when not in use, data center operators can look to energy efficiency improvements aimed at avoiding increased risk of downtime. PTS Data Center Solutions performs Data Center Energy Efficiency Assessments on behalf of utilities and data center operators. However, reducing the number of data centers and their sizable energy consumption is not going to happen in the near future.
Friday, February 01, 2013
PTS Data Center Solutions recently performed an assessment of the World Wildlife Fund’s (WWF) data center at its headquarters in Washington, D.C. That’s a pretty big deal considering WWF is the world’s leading conservation organization with total operating revenue of over $230 million. WWF networks through 100 countries with over 5 million members, so its data center is a very important part of overall conservation operations.
PTS was able to detect a critical problem with WWF’s data center environment. The data center was experiencing an increase in heat and the Computer Room Air Conditioning units weren’t getting the job done. WWF was in dire need of new power and cooling solutions. IT infrastructure availability and energy efficiency were also vital concerns as they are with all of PTS’ clients.
“The use of modular systems is an excellent strategy to address growth without major disruptions,” said Michael Petrino, PTS Vice President. “WWF is now operating a reliable, energy efficient data center. With the new, energy efficient cooling solution in place, the WWF data center is able to conserve significant amounts of energy and allow the WWF to practice internally its mission of conservation of natural resources.”
To read more about PTS’ success with WWF click here or contact us for a copy of the case study and the Press Release.
Friday, January 18, 2013
The Clarity™ Matrix LCD Video Wall System delivers the ultimate display solution for digital signage applications. Optimized for uninterrupted 24/7 operation, Clarity™ Matrix is an ultra-thin bezel LCD media wall system that delivers outstanding visual performance, supports extended operation and requires minimal installation space.
Contact PTS to learn more about digital signage solutions as well as digital display solutions for Network Operations Centers.
Wednesday, January 02, 2013
"The event was originally scheduled for November 7th, but we all know what had just taken place the week before - Hurricane Sandy", said Larry Davis, VP, IT Solutions Group for PTS. "If only we could have spread the word earlier and gotten the Quorum solution out to clients without a clear Disaster Recovery plan; the solution really works, for a reasonable price."
Developed by Quorum engineers several years ago as a simple-to-deploy-and-use alternative to expensive redundant server, storage, and virtualization platform approaches, the Quorum solution has been a hit within market sectors including:
- Financial Services
- Law Practices
- Accounting Firms
At the event, Quorum engineers provided a live demonstration of a server failure and the One-Click Recovery™ inherent in the onQ solution's design:
- Current Forever: Each ultra-efficient update is merged into the onQ device, which houses virtual machine recovery nodes and full current images of client servers and virtual servers.
- Ready-to-Run: The approach doesn't wait until you need to recover to build your virtual recovery nodes, allowing one-click recovery at any time.
- Point-in-Time Recovery: Even though changes are merged into the ready-to-run recovery node, you can restore files or an entire system to a prior state. This is a perfect fit for businesses and organizations needing the ability to store and recover 7 years of data for regulatory purposes.
Wednesday, December 19, 2012
The first panel speaker, Peter Sacco, President of PTS Data Center Solutions, provided a solid overview of the DCIM sector, its functional areas, and the challenges faced by both manufacturers and end clients. He then put manufacturers on notice. Mr. Sacco stated there are 100+ companies producing hardware, software, and/or platforms for DCIM. The problem is that typically each company's offering does one or two of the functional requirements well, others less well, and others not at all. Worse, little effort is made to work with one another, though that is changing as providers realize their own limitations.
As such, what data center managers really seek from DCIM, easy access to meaningful data that seamlessly correlates to actionable plans, has yet to be realized. In support of this supposition, Pete mentioned the Uptime Institute's 2010 paper Data Center Infrastructure Management: Consolidation, But Not Yet, which notes that the market for data center infrastructure management systems will grow from $500 million in 2010 to $7.5 billion by 2020. So far, this hypergrowth hasn't materialized, as the holy grail of DCIM has been stunted by underpowered solutions or solutions that are difficult to deploy.
The remainder of the DCIM panel discussion centered upon manufacturer and user challenges, new developments within the industry, and future directions as panelists compared existing solutions and viability of current deployments.
Beyond the DCIM panel, a second panel discussion focused on Lessons Learned from the aftermath of Hurricane Sandy. Various disaster recovery approaches, processes, and solutions were debated by the panelists. The event also included exhibits with lively discussions around many current hot topics in the data center community.
To learn more about Mr. Sacco's perspectives on DCIM, contact him via email, or download Pete's latest white paper Data Center Infrastructure Management - The Updated Elephant which provides a detailed review of the market for DCIM solutions. Additional DCIM solutions are available on the PTS website. More information on the Data Center Summit is available at Data Center Knowledge.