Monday, July 20, 2009

LinkedIn Discussion on Eliminating the Battery String

Thanks to everyone who’s participated in our Computer Room Design networking group at LinkedIn.com so far! We’re off to a great start, with more than 200 members joining in the first two weeks. I’d like to share highlights from one of our recent discussions…

Kevin Woods, Director of Business Development and Sales at i2i, asked:

Eliminating the Battery String? Does anyone have experience with, or opinions on, the viability of UPS/CPS systems? They incorporate a flywheel between the engine and the generator; in cases of power interruption, the flywheel uses stored kinetic energy to power the generator for up to 30 seconds while the engine is engaged.


ANSWERS:

• Mark Schwedel, business partner at EMC and advisor for Green Rack Systems, recommended taking a look at the patent for an improved UPS/CPS system, which employs a high-efficiency uninterrupted power supply function integrated with an engine-generator set that combines both short term protection against momentary power interruptions with longer term power generation.

• Gordon Lane, Facilities Coordinator at Petro Canada, shared his experience:
Not a direct comparison to a gen/engine set-up, but I have a flywheel UPS system that has been in service for 23 years. Very reliable - change the bearings every 50,000 hours (about 6 years) - and we have just about completed a program of taking the MGs out for cleaning and re-insulation.

Obviously it's coming to end of life - 20 years was the estimated life - but the serviceability has been phenomenal.

Certainly looking to replace with a similar system and I believe Caterpillar has a flywheel UPS solution that they integrate into their diesel offerings.

• Jason Schafer, Senior Analyst at Tier1 Research, explained in part:
My personal issue with flywheel solutions, aside from the reliability that both sides will argue, is that 30 seconds simply isn't enough time when you are talking about the criticality most datacenters need. The most common argument relates to allowing time to manually start a generator; and flywheel advocates will say "if a generator doesn't start in 30 seconds it's not very likely that it's going to start in 20 minutes" - I disagree with this. I've seen, on more than one occasion, where generator maintenance was being performed and through human error the EPO switch on the generator was mistakenly left pushed in. There's no way anyone is going to identify the problem and fix it in 30 seconds - I'd be surprised if anyone even got to the generator house in 30 seconds after a power outage. Minutes, however, are a different story.

I'm not saying that flywheels and CPSs don't have their place - I think they do, or rather will in large scale in datacenters, but we're not quite there yet. When virtualization plays a part in the redundancy and fault tolerance of a datacenter, where ride-through in the event of a power outage is more of a convenience than a necessity (a-la Google's datacenters - they can lose an entire facility and continue on for the most part), you'll see flywheels gain more traction.
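
For a rough sense of the physics behind that 30-second figure, a flywheel's usable ride-through time follows directly from its stored kinetic energy. The sketch below is purely illustrative - the inertia, speeds, and load are made-up numbers, not figures from any vendor's CPS offering:

```python
import math

# A flywheel stores kinetic energy E = 1/2 * I * w^2, and the usable
# ride-through time is roughly E_usable / P_load. All numbers here are
# illustrative assumptions, not vendor specifications.

def ride_through_seconds(inertia_kg_m2, rpm_full, rpm_min, load_kw):
    """Seconds of ride-through as the wheel spins down from rpm_full to rpm_min."""
    w_full = rpm_full * 2 * math.pi / 60           # rad/s
    w_min = rpm_min * 2 * math.pi / 60
    e_usable = 0.5 * inertia_kg_m2 * (w_full ** 2 - w_min ** 2)   # joules
    return e_usable / (load_kw * 1000.0)

# Hypothetical 150 kg*m^2 wheel spinning down from 3,600 to 1,800 rpm
# while carrying a 250 kW critical load:
print(f"{ride_through_seconds(150, 3600, 1800, 250):.0f} s")  # ~32 s
```

The cube of the numbers matters less than the shape of the trade-off: doubling the load halves the ride-through, which is why the debate above centers on whether tens of seconds is ever enough.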


What are your thoughts on the viability of the UPS/CPS systems? Please share your experience by posting a comment here, or by continuing the discussion in the Computer Room Design Group on LinkedIn.

Thursday, July 16, 2009

Introducing PTS’ Data Center Education Series

How extensive is your knowledge about all aspects of your data center? With our newly launched Data Center Education Series, you will never look at your IT and support infrastructure the same way again.

PTS’ Data Center Education Series will help you better assess problems in your data center by providing you with substantive knowledge that you can take back to your data center to improve operations, availability, and efficiency - ultimately reducing operating cost and improving service delivery to your users.

The education series provides students with comprehensive, vendor-neutral, module based training led by the data center design experts from PTS. We discuss the most pertinent topics in the data center industry, tying in case studies and real world situations to provide the knowledge you need to understand, operate, manage, and improve your data center.

The Standard Training Series is a three (3) day class held multiple times per year at major cities across the United States, Canada, and Europe. Our next session will take place in Midtown NYC from September 15-17th -- visit our site to view the agenda. Can’t make it to NYC? We'll also be coming to Chicago (October 21-23) and Dallas (December 7-9). I encourage you to reserve your seat today, as space is limited.

The education series will cover the following topics:

• Fundamentals of Data Center Cooling
• Fundamentals of Data Center Management
• Fundamentals of Physical Security
• Fundamentals of Fire Protection
• Fundamentals of Data Center Power
• Fundamentals of Data Center Maintenance
• Fundamentals of Designing a Floor Plan
• Fundamentals of Data Center Cabling
• Fundamentals of Energy Efficiency

Priced at only $1,795 per student, the training includes all course materials in addition to a continental breakfast and lunch each day. Additionally, if you attend with colleagues from work, you'll all receive a 10% discount. You'll realize a quick ROI from this invaluable knowledge, delivered straight from data center experts in an in-depth, intimate training series.

Data Center Education Series – Customized for your needs!

We also offer education programs customized to your IT team’s needs. If you have a large group and need training, we can come to you and present those topics of most interest to you! Choose your desired location (typically your own facility). Choose the topics you want to see, including any or all of the available topics from the standard 3-day training class.

In addition, if you have a topic in mind you don't see currently listed in our offerings, we'll build it for you for only a nominal fee to cover time and material costs.

The Customized Training Series is priced at $15,000 for 2 days or $20,000 for 3 days, plus travel expenses. In addition to the training, you have the option to purchase a one-day data center site assessment for $5,000. The assessment is performed prior to the training so that the training can address the issues it uncovers.

Please join us on LinkedIn & Twitter

PTS is excited to provide our peers with a new online forum in which to discuss the planning, design, engineering, and construction of data centers and computer rooms.

If you’ve been reading our blog for a while, you may already be aware of our Facebook Page at https://www.facebook.com/PTSDataCenter. (A big ‘thank you’ to everyone who’s added themselves as fans!) Today, I’m happy to announce that PTS is further expanding our online presence with the goal of facilitating the open exchange of ideas among small-to-medium sized data center and computer room operators.

At the forefront of this effort is the newly created Computer Room Design networking group on LinkedIn.com. You can check it out by visiting http://www.linkedin.com/groups?gid=2099901. Hosted by the consultants and engineers at PTS Data Center Solutions, the group is an open forum in which professionals can share industry-related news, ideas, issues and experiences.

Membership is free and open to all professionals and vendors in the computer room and data center industry. We hope that industry leaders will look at this as an opportunity to share knowledge, discover new services and opportunities, and expand their networks.

So far, our networking group on LinkedIn.com has attracted broad interest, gaining more than 100 members in the first week alone. Featured discussions include best practices for consolidation strategies, how to combat downtime in the data center, and industry concerns regarding the Power Usage Effectiveness (PUE) metric.

This thought leadership is further supported on PTS’ Twitter profile (http://twitter.com/ptsdatacenter) which features the latest industry news, highlights from the LinkedIn networking group, and insights from our engineers. If you’re on Twitter, please send us a message and we’ll be sure to follow you back!

Monday, June 29, 2009

Energy Efficiency Remains Priority In Spite of Economic Troubles

In lean times, data centers are learning to do more with less. The Aperture Research Institute of Emerson Network Power just released the results of a study showing that, despite the global economic downturn, energy efficiency is still a top-of-mind objective for many data centers. In fact, data center managers are concentrating on resolving efficiency issues as a way to balance increasing demand for IT services with stagnant budgets.

The report reveals that:

Data center managers will look at ways to squeeze more from their existing resources, with 80 percent of those surveyed saying they can create at least 10 percent additional capacity through better management of existing assets. Thirty percent of those surveyed said they could find an additional 20 percent. There is likely to be a revitalized focus on tools that provide insight into resource allocation and use.

Data centers will also look to green initiatives to help manage their operating expenses, with 87 percent of those surveyed having a green initiative in place and the majority expecting to continue or intensify these efforts.


The survey data also suggests that the downturn will have "little effect on the demand for IT services" – a positive indicator for economic recovery. I recommend downloading the full Research Note as a PDF at the Aperture Research Institute’s website. It’s an interesting read.

Wednesday, June 24, 2009

Investing in Energy-Efficient Equipment

In "Taking Control of the Power Bill", Bruce Gain takes a look at how many data center admins are retooling their IT infrastructures’ power needs to accommodate growth and slash costs. He notes that although many admins struggle with having to pay additional costs associated with switching to more eco-efficient server room cooling, airflow designs, and other related equipment, paying for more expensive yet efficient equipment is a smart investment when you look at the big picture.

In order to justify that investment, admins should calculate the ROI offered by different scenarios. By creating models to outline the costs of ownership for different configurations and doing a full costs-benefits analysis, you can ease the decision making process. Once you begin making the switch to a more energy-efficient approach, it’s recommended that your organization phase in new equipment as part of the natural growth and evolution of your IT systems.

Michael Petrino, vice president of PTS, also offers his thoughts on the subject, providing a concrete example of cheaper yet less efficient components vs. more power-efficient but costly alternatives. I encourage you to check out the full article in Vol.31, Issue 17 of PROCESSOR.

Tuesday, June 09, 2009

Drug Companies Put Cloud Computing to the Test

Traditionally characterized as "late adopters" when it comes to their use of information technology (IT), major pharmaceutical companies are now setting their sights on cloud computing.

Rick Mullin at Chemical & Engineering News (C&EN) explores how Pfizer, Eli Lilly & Co., Johnson & Johnson, Genentech and other big drug firms are now starting to push data storage and processing onto the Internet to be managed for them by companies such as Amazon, Google, and Microsoft on computers in undisclosed locations. In the cover story, “The New Computing Pioneers”, Mullin explains:

“The advantages of cloud computing to drug companies include storage of large amounts of data as well as lower cost, faster processing of those data. Users are able to employ almost any type of Web-based computing application. Researchers at the Biotechnology & Bioengineering Center at the Medical College of Wisconsin, for example, recently published a paper on the viability of using Amazon's cloud-computing service for low-cost, scalable proteomics data processing in the Journal of Proteome Research (DOI: 10.1021/pr800970z).”


While the savings in terms of cost and time are significant (particularly in terms of accelerated research), this is still new territory. Data security and a lack of standards for distributed storage and processing are issues when you consider the amount of sensitive data that the pharmaceutical sector must manage. Drug makers are left to decide whether it’s smarter to build the necessary infrastructure in-house or to shift their increasing computing burdens to the cloud.

Friday, June 05, 2009

Data Center Professionals Network

The other day I stumbled across the Data Center Professionals Network, a free online community for professionals from around the world who represent a cross section of the industry. Members include data center executives, engineering specialists, equipment suppliers, training companies, real-estate and building companies, colocation and wholesale businesses, and industry analysts. The recently launched networking site enables key players in the industry to easily connect, interact, and develop business opportunities.

According to Maike Mehlert, Director of Communications:

The Data Center Professionals Network has been set up to be a facilitator for doing business. It acts as a one-stop-shop for all aspects of the data center industry, from large corporations looking for co-location or real estate, or data centers looking for equipment suppliers or services, to engineers looking for advice or training.


Features of the social network include a personalized user profile, as well as access to job boards, business directories, press releases, classified ads, white papers, photos, videos and events.

I haven’t had a chance to join yet but if you want to check it out, visit http://www.datacenterprofessionals.net/ (you can sign in using a Ning ID if you already have one). If you do visit the site, post a comment and let me know what you think.

Wednesday, May 20, 2009

How Big Should Large Screen Displays Be In Your Command & Control Room?

Many A/V planners are challenged to determine how big the large screen displays in their command & control rooms should be. There are some fairly involved calculations that can help you determine the minimum character size (sometimes referred to as 'x' size) under a given circumstance. This 'x' size is defined as the height of the smallest coherent element within the presented material. Think of it in terms of a lower-case letter x.

This lower-case 'x' - which is the same height as the smallest of the lower-case letters - should subtend not less than 10 arc minutes on a viewer's retina to be recognized at any viewing distance. Things become more complicated when viewers are located off axis from the center of the screen, as this requires a larger subtended angle, and there is some effect from colored symbols, the amount of time the image is on screen, etc. As you can imagine, if you were sizing a screen for projection of a spreadsheet in order to review your Data Center metrics, you might want to use these calculations, which can be found in this ICIA publication: http://www.infocomm.org/cps/rde/xchg/infocomm/hs.xsl/9229.htm
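
To make the rule concrete, here is a small sketch that converts the 10-arc-minute minimum into a character height at a given viewing distance (the distances are arbitrary examples, and this ignores the off-axis and color complications mentioned above):

```python
import math

# The smallest lower-case character (the 'x' height) should subtend at
# least 10 arc minutes at the farthest viewer's eye. For a viewing
# distance d, the minimum height is h = 2 * d * tan(theta / 2).

def min_x_height_mm(viewing_distance_m, arc_minutes=10):
    theta = math.radians(arc_minutes / 60.0)       # arc minutes -> radians
    return 2 * viewing_distance_m * 1000 * math.tan(theta / 2)

for d in (4, 8, 12):                               # example viewer distances, metres
    print(f"{d} m -> minimum x-height {min_x_height_mm(d):.1f} mm")
```

Roughly 3 mm of character height per metre of viewing distance - once you know the farthest seat in the room and the number of spreadsheet rows you need legible, the screen size follows.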

A good free presentation on this subject can be found at: http://www.educause.edu/Resources/DesignStandardsandPracticesfor/155327

Tuesday, May 05, 2009

Should New York Stock Exchange be hiding the location of its new Data Center?

I find it interesting that major financial institutions & government agencies attempt to hide the locations of their Data Centers. How effective can this non-disclosure aspect of security really be in today's media-frenzied world? Obviously not too effective, since NYSE's new Data Center build is already being talked about in Data Center Knowledge & The Bergen Record.

http://www.datacenterknowledge.com/archives/2009/05/04/financial-data-center-hiding-in-plain-sight/comment-page-1/#comment-3718

Even if details of this site's location go unpublished, word from the employees & vendors who support the site certainly will spread. I'm not saying that we should broadcast the location of this Data Center in neon lights, but if a new Data Center is constructed covering all 4 disciplines of security - Physical, Operational, Logical & Structural (POLS) - will it matter if the public knows where the Data Center is? It isn't likely that the NYSE can really hide the whereabouts of its ~400K square foot Data Center anyway. Most Data Center designers cover Physical & Logical security thoroughly, as those disciplines are maturing. What is often not covered thoroughly is Structural Security: organizations become so focused on getting a CO and getting the new Data Center live that they often don't cover themselves from the structural threats of fire, water, theft & wind.

How many Data Centers are built with only a 20-minute fire-rated door? How many are built with more than a 10-15 minute Class 125 rating? The really interesting aspect of this point is that there are new building materials that can cover Structural Security and eliminate these unnecessary exposures while actually constructing the facility & obtaining the CO faster.

Thursday, April 30, 2009

Free Data Center Assessment Tools from the Department of Energy.

It certainly shows where we are at in this country when the Government is creating free tools to help us assess our efficiency and giving us guidance on how to improve our Data Center efficiency. What choice does the DoE have, with demand for power from our Data Centers expected to reach 10% of total US power demand by 2011, while we have a growing need to reduce our carbon footprint & demand on fossil fuels?

In my opinion, a couple areas of caution are warranted in the use of these free tools. First, the tool is free, but you still have to have the means to collect the data to enter into it: details about the power consumption of your equipment & whether the equipment can be controlled, utility bills, temperature readings at the rack inlet & on supply and return, airflow readings, etc. Second, the presentation & guidance suggest that we can use air-side & water-side economizers, decrease our airflow, and raise our water temperature & supply-air set points without even discussing the impacts this could have on availability. The guidance for use of the tools discusses the use of thermography or CFD, but treats it as merely a suggested option in our analysis of improving DCiE while we are raising temperatures & decreasing airflow. These tools do present value, and they are free. I just wish our Government had stressed the tools' limitations & cautioned users on other considerations that must be factored in, such as the availability requirements of your Data Center.
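
As a concrete example of what the DoE tools ultimately compute, DCiE is simply the IT load divided by the total facility load (and PUE is its inverse). The readings below are made up for illustration - as noted above, collecting the real inputs is the hard part:

```python
def dcie(it_load_kw, total_facility_kw):
    """Data Center infrastructure Efficiency: IT load / total facility load."""
    return it_load_kw / total_facility_kw

# Hypothetical metered readings: 400 kW of IT load inside an 800 kW facility.
it_kw, facility_kw = 400.0, 800.0
print(f"DCiE = {dcie(it_kw, facility_kw):.0%}, PUE = {facility_kw / it_kw:.2f}")
# -> DCiE = 50%, PUE = 2.00
```

The number is only as good as the measurements behind it, which is exactly why the data-collection effort deserves as much attention as the tool itself.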

http://www1.eere.energy.gov/industry/saveenergynow/partnering_data_centers.html

Wednesday, April 15, 2009

How important is it to consider the Grid for my back-up data center & DR Plan?

It has been several years since the August 2003 blackout, but I can't help thinking that we are all being lulled to sleep on the next major grid issue. There are only 3 main power grids in the US, so if I have my primary Data Center on the Eastern Interconnect, should my DR requirement be to locate my back-up site in TX on the ERCOT grid, or in the west on the WSCC grid? Or is there any benefit to locating in a different NERC region, of which there are 10 in the US? Can that benefit be equivalent to being on a separate grid? I doubt it, since the 2003 blackout crossed multiple NERC regions within the Eastern grid.

http://www.eere.energy.gov/de/us_power_grids.html

Should I not be concerned with this & just choose or build a site with a higher level of redundant & back-up power? Is it more important to have the DR site in a location easily accessible to our technical experts than to have it on a different grid? Remember, 9/11 grounded flights, so if we had another event of that magnitude it could take days for my technical experts to get to our DR site, if they could get there at all. Of course, we can introduce tools for full remote control & power control so that access to our physical environment becomes less important; should I therefore make it best practice to get that DR site on a separate grid? If I put the location of my DR site into my key design criteria, where should it fall on my priority list?

Monday, April 13, 2009

Going Green with Data Center Storage

Just saw an interesting article in Enterprise Strategies about the use of magnetic tape as an energy-efficient storage solution. In “Tape’s Role in the Green Data Center,” Mark Ferelli discusses how tape technology is making a comeback by helping to keep the data center green as utility bills rise. He explains:

The efficient use of disk can help with data center greening when a user reads and writes to the densest possible disk array to ensure capacity is maximized and more disk is not bought unnecessarily.

In archiving, on the other hand, the greenest option is tape, which uses less power and produces a lower heat output. This not only eases the bite of the utility bill but places less strain on HVAC systems. In contrast, the case can be made that using disk for archiving does more harm since disks that spin constantly use much more power and generate more heat.


Ferelli also takes a look at alternative power and cooling solutions, such as MAID (Massive Array of Idle Disks) storage arrays, in comparison with tape-based storage.

What’s been your experience with energy-efficient storage technology? Do tape-based systems offer better power savings versus disk-based solutions?

Friday, April 03, 2009

Google Unveils Server with Built-in Battery Design

For the first time on Wednesday, Google opened up about the innovative design of its custom-built servers.

The timing of the reveal, which coincided with April Fools' Day, left some wondering if the earth-shattering news was a prank. If it sounds too good to be true, it probably is, right? Not so in this case. In the interest of furthering energy efficiency in the industry, Google divulged that each of its servers has a built-in battery design. This means that, rather than relying on uninterruptible power supplies (UPS) for backup power, each of Google's servers has its own 12-volt battery. The server-mounted batteries have proven to be cheaper than conventional UPS and provide greater efficiency.

Google offered additional insights into its server architecture, its advancements in the area of energy efficiency, and the company’s use of modular data centers. For the full details, I recommend reading Stephen Shankland’s coverage of the event at CNET News. It’s fascinating stuff. Plus, Google plans to launch a site in a few days with more info.

Thursday, April 02, 2009

Can The Container Approach Fit Your Data Center Plans?

Conventional Data Center facilities have a long history of difficulty keeping up with the increasing demands of new server & network hardware, so organizations are now looking for solutions that upgrade the facility along with the technology, rather than continuing to invest millions in engineering & construction upgrades to support higher densities, or bearing the expense of building or moving to new facilities that can handle those densities. Containers offer a repeatable, standard building block. Technology has long advanced faster than facilities architecture, and containerized solutions at least tie a large portion of the facility's advance to the technology's advance.

So why haven't we all already moved into containerized Data Center facilities, and why are so many new facilities underway that have no plans for containers? Hold on: Google just revealed for the first time that, since 2005, its data centers have been composed of standard shipping containers - each with 1,160 servers and a power consumption that can reach 250 kilowatts. First Google showed us all how to better use the internet; now have they shown us all how to build an efficient server & Data Center? The container reduces the real estate cost substantially, but the kW cost only marginally; Google really focused its attention on efficiency savings at the server level - bravo!

The weak link in every data center project will always remain the ability of the site to provide adequate redundant capacity, emergency power & heat rejection. These issues do not go away in the container ideology. In fact, it could be argued that the net project cost in the container model could be greater, since the UPSs & CRAC units are often located within the container, which increases their overall count. Just as in any Data Center project, rightsizing the utility power, support infrastructure & back-up power to meet the short & long term goals of your key design criteria is the most important aspect to consider in any containerized project.

What containers do accomplish is creating a repeatable standard & footprint for the IT load and for how the power, air & communications are distributed to it. Organizations spend billions of dollars planning & engineering those aspects, in many cases only to find that their solution is dated by the time they install their IT load. With containers, when you upgrade your servers you upgrade your power, air & communications simultaneously, keeping them aligned with your IT load.

What about the small & medium business market? Yes, the containerized approach is a very viable alternative to a 100,000+ square foot conventional build, but what about smaller applications? A container provides an all-encompassing building block for technology & facility architecture, but in a fairly large footprint. Not everyone has a need for 1,400U of space or 22,400 processing cores, or the wherewithal to invest over $500K per modular component. Unless SMBs want to colocate, or sign off to a managed service provider running their IT in a cloud in a new containerized Data Center, the container approach doesn't have a play for the SMB - or does it? There are certainly solutions in the market to help an SMB build its own smaller-footprint, high-density movable enclosure or mini-container; it's surprising there has been so little focus on that much larger market. We are exploring some containerized approaches for the SMB market that would also address branch & division applications for large organizations, where today's container offerings likely present too large a building block to be practical.

For more information about Containerized Data Centers & some of the methodologies for deployment I recommend Dennis Cronin's article in Mission Critical Magazine.

http://www.missioncriticalmagazine.com/CDA/Articles/Features/BNP_GUID_9-5-2006_A_10000000000000535271

And certainly the details on CNET about Google's Containers & Servers.

http://news.cnet.com/8301-1001_3-10209580-92.html

Wednesday, April 01, 2009

Data Center Power Drain [Video]

Click here to watch a recent news report from CBS5.com on what's being done to make San Francisco's data centers more energy efficient.

In the "On the Greenbeat" segment, reporter Jeffrey Schaub talks to Mark Breamfitt at PG&E and Miles Kelley at 365 Main about how utility companies and the IT industry are working to reduce overall energy consumption. According to the report, each of 365 Main's three Bay Area data centers uses as much power as a 150-story skyscraper, with 40 percent of that power used to cool the computers.

Wednesday, March 25, 2009

Spending on Data Centers to Increase in Coming Year

An independent survey of the U.S. data center industry commissioned by Digital Realty Trust indicates that spending on data centers will increase throughout 2009 and 2010.

Based on Web-based surveys of 300 IT decision makers at large corporations in North America, the study reveals that more than 80% of the surveyed companies are planning data center expansions in the next one to two years, with more than half of those companies planning to expand in two or more locations.

In addition, the surveyed companies plan to increase data center spending by an average of nearly 7% in the coming year. “This is a reflection of how companies view their datacenters as critical assets for increasing productivity while reducing costs," noted Chris Crosby, Senior Vice President of Digital Realty Trust.

To view the rest of the study findings, visit the Investor Relations section of DigitalRealtyTrust.com.

Thursday, March 19, 2009

Top 3 Data Center Trends for 2009

Enterprise Systems just published the “Top Three Data Center Trends for 2009” by Duncan Campbell, vice president of worldwide marketing for adaptive infrastructure at HP. In the article, Campbell discusses how companies need to get the most out of their technology assets and, in the coming year, data centers will be pressured to "maintain high levels of efficiency while managing costs". In addition, companies will need to make an up-front investment in their data center assets in order to meet complex business demands.

Campbell predicts:
  • “There will be no shortage of cost-cutting initiatives for enterprise technology this year.”
  • “As virtualization continues to enable technology organizations to bring new levels of efficiency to the data center, the line between clients, servers, networks and storage devices will continue to blur.”
  • “Blade offerings will continue to mature in 2009. Server, storage, and networking blades will continue to improve their energy efficiency and reduce data center footprints. Vendors are also now developing specialty blades, finely tuned to run a specific application.”

Efficiency, agility, and scalability will remain priorities for companies. By taking advantage of innovative data center technologies, companies can further reduce costs while increasing productivity – a goal that is of particular importance during challenging economic times.

Wednesday, March 11, 2009

It’s Nap Time for Data Centers

Yesterday at the International Conference on Architectural Support for Programming Languages and Operating Systems in Washington, D.C., researchers from the University of Michigan presented a paper, titled “PowerNap: Eliminating Server Idle Power”.

“One of the largest sources of energy-inefficiency is the substantial energy used by idle equipment that is powered on, but not performing useful work,” says Thomas Wenisch, assistant professor in the department of Electrical Engineering and Computer Science. In response to this problem, Wenisch's team has developed a technique to eliminate server idle-power waste.

Their paper addresses the energy efficiency of data center computer systems and outlines a plan for cutting data center energy consumption by as much as 75 percent. This would be accomplished through the concurrent use of PowerNap and the Redundant Array for Inexpensive Load Sharing (RAILS). PowerNap is an energy-conservation approach that enables the entire system to transition rapidly between a high-performance active state and a near-zero-power idle state in response to instantaneous load, essentially putting servers to sleep as you would an ordinary laptop. RAILS is a power provisioning approach that provides high conversion efficiency across the entire range of PowerNap's power demands.
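
A back-of-the-envelope model shows where savings of this magnitude come from: a lightly utilized server spends most of its time idle, so cutting idle draw to near zero slashes average power. The wattages and utilization below are my own illustrative assumptions, not figures from the paper:

```python
def avg_power_w(utilization, p_active_w, p_idle_w):
    """Average draw for a server spending (1 - utilization) of its time idle."""
    return utilization * p_active_w + (1 - utilization) * p_idle_w

# Hypothetical server: 450 W when busy; 270 W idling conventionally
# versus roughly 10 W in a PowerNap-style near-zero idle state.
u = 0.2                                    # 20% utilized
conventional = avg_power_w(u, 450.0, 270.0)
napping = avg_power_w(u, 450.0, 10.0)
print(f"{1 - napping / conventional:.0%} average-power reduction")  # ~68%
```

The lower the utilization, the bigger the win, which is why the paper's headline numbers apply to bursty Web 2.0 workloads rather than servers running flat out.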

The paper concludes:

PowerNap yields a striking reduction in average power relative to Blade of nearly 70% for Web 2.0 servers. Improving the power system with RAILS shaves another 26%. Our total power cost estimates demonstrate the true value of PowerNap with RAILS: our solution provides power cost reductions of nearly 80% for Web 2.0 servers and 70% for Enterprise IT.


To read the full text, please visit Wenisch’s site to download a PDF of the paper: http://www.eecs.umich.edu/~twenisch/?page=publications.php.

Monday, March 09, 2009

Finding the Silver Lining During an Economic Downturn

It seems, no matter which way you look these days, there’s more bad news. Job losses are up. The stock market is down. But not every business is focusing on the negative. In fact, there’s even a growing list of companies refusing to take part in the recession. As Jamie Turner at the 60 Second Marketer writes:

To be sure, times are tough. They’re downright B-A-D. But the world isn’t ending. The sky is not falling. In fact, you and your business will be here tomorrow and the next day — if you stop focusing on the negative and start focusing on the positive.


In light of this, I’d like to highlight one company that sees data center opportunity despite the poor economy: Juniper Networks. According to this article in Network World, Juniper has “launched an aggressive campaign to expand its enterprise business with a targeted assault on the data center.” They’ve announced a project called Stratus, which their blog describes as an attempt to “create a single data center fabric with the flexibility and performance to scale to super data centers, while continuing to drive down the cost and complexity of managing the data center information infrastructure.”

And why announce Stratus now? Tom Nolle, president of consultancy CIMI Corp, explains: “Juniper cannot hope to match Cisco in breadth so it is making that an asset instead of a liability. Juniper is timing its success with Stratus to the economy's recovery and to developing symbioses with partners.”

That’s the kind of strategic, fighting spirit that helps a company come out on top, wouldn’t you say?

Friday, February 20, 2009

Improving Mobile Applications in the Enterprise

Look for Michael Petrino, vice president of PTS Data Center Solutions, in the latest issue of PROCESSOR (Vol.31, Issue 8).

In "Essential Mobile Tools: Maximize Your Mobile Toolset to Better Unlock Wireless’ Potential", Petrino shares his thoughts on the importance of establishing the right power infrastructure in order to improve the broadcast range of on-campus wireless connections.

The article discusses several easy-to-implement ways that enterprises can make better use of mobile applications so that they can support mobile employees without placing an unnecessary burden on the data center or IT support teams. It features insights from Robert Enderle, an analyst for the Enderle Group, and Joel Young, CTO and senior vice president of R&D at Digi International.

To read the full article, please visit PROCESSOR.com.

Tuesday, January 27, 2009

Acquisition of NTA’s Technology Consulting Assets

I’m pleased to announce that PTS has officially acquired critical components of Nassoura Technology Associates, LLC (NTA), including all of its technology consulting assets. If you are not already familiar with NTA, it was a leading technology consulting and engineering firm based in Warren, New Jersey, which developed in-house the widely acclaimed software product dcTrack3.0. Recently, Raritan, Inc. purchased NTA’s dcTrack3.0 product in a separate transaction.

NTA’s assets will enable us to expand our existing technology consulting service offerings, including network, structured cabling, security, and audio/visual design. Furthermore, this acquisition enables us to enhance our existing library of technical drawings, specifications, and request for proposal (RFP) documentation. The acquisition also included the transfer of documents for all of NTA’s completed client projects across a broad spectrum of industries.

If you are a previous client of NTA, we will continue to maintain your design documents and provide you with the expert level of service you had become accustomed to as an NTA client. We are extremely excited to expand our customer base and to have this opportunity to improve our client deliverables by acquiring the assets of one of the most influential design firms serving the data center industry.

In addition to the acquisition of NTA’s technology consulting assets, we are also pleased to announce the addition of six (6) new employees to our growing family of data center experts. We are sure they will contribute substantially to PTS’ continued growth in 2009. The new employees include data center solutions professionals Andrew Graham, Peter Graham, and Michael Piazza, as well as architect Michael Relton and senior electrical engineer Alex Polsky, P.E.

The latest new employee is data center software developer and pioneer Dave Cole. Dave has a storied history of developing software and hardware products for System Enhancement Corporation, later purchased by APC, and for Hewlett-Packard. Most notably, however, Dave founded and then sold his company, The Advantage Group, along with his industry-leading data center support infrastructure device monitoring product, to Aperture, later purchased by Emerson. Stay on the lookout for further announcements as to what Dave and I are up to.

Monday, January 19, 2009

Data Centers Understaffed and Underutilized?

The following news snippet from SearchStorage.com caught my eye and I couldn’t resist sharing it here:

Symantec Corp.'s State of the Data Center 2008 report paints a picture of understaffed data centers and underutilized storage systems.

The report, based on a survey of 1,600 enterprise data center managers and executives, found storage utilization at 50%. The survey also discovered that staffing remains a crucial issue, with 36% of respondents saying their firms are understaffed. Only 4% say they are overstaffed. Furthermore, 43% state that finding qualified applicants is a problem.

Really interesting numbers, particularly when it comes to staffing issues. With so many layoffs and other cutbacks happening, it’s not surprising that firms feel understaffed. However, with the national unemployment rate reaching 7.2 percent in December, I don’t think finding qualified applicants will be as much of a problem in 2009. As for the underutilization of storage systems, this is a major contributor to high data center costs. If corporate budgets continue to get slashed, I can guarantee that virtualization is going to stay right at the top of most data center managers’ to-do lists for the foreseeable future.

(By the way, if you’re an unemployed techie, you might want to check out this article from CIO.com. Socialtext is offering its social networking tools free to laid-off workers who want to form alumni networks and share job leads.)