
Monday, February 22, 2010

Considerations for Storage Consolidation

The growth of company files, e-mail, databases, and application data drives a constant need for more storage. But with many networks architected with storage directly attached to servers, growth means burdensome storage management and decreased asset utilization. Storage resources remain trapped behind individual servers, impeding data availability.

There are three storage consolidation architectures in common use today:
  • direct-attached storage (DAS),
  • network-attached storage (NAS), and
  • the storage area network (SAN).

In the traditional DAS model, storage is tied directly to a server and accessible only through that server. In NAS, the storage device has its own network address; files can be stored and retrieved quickly because requests do not compete with other workloads for a server's processor resources. The SAN is the most sophisticated architecture and usually employs Fibre Channel technology, although iSCSI-based SANs are becoming more popular due to their cost effectiveness. SANs are noted for high throughput and for providing centralized storage to numerous subscribers over a large geographic area, and they support data sharing and data migration among servers.
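The practical difference between file-level (NAS) and block-level (SAN or DAS) access can be sketched in a few lines of Python. The paths below are stand-ins: a temporary file plays the role of both the NAS share and the raw LUN so the sketch runs anywhere.

```python
import os
import tempfile

BLOCK = 512  # SANs present raw devices addressed in fixed-size blocks

# "NAS" access: the client names a file; the NAS appliance owns the
# filesystem and returns the file's contents.
share = tempfile.NamedTemporaryFile(delete=False, suffix=".txt")
share.write(b"quarterly report")
share.close()
with open(share.name, "rb") as f:
    file_data = f.read()              # file-level: a name maps to bytes

# "SAN" access: the host sees a raw block device (a LUN), lays its own
# filesystem on top, and addresses blocks by byte offset.
lun = tempfile.NamedTemporaryFile(delete=False)
lun.write(os.urandom(8 * BLOCK))      # an 8-block stand-in "LUN"
lun.close()
with open(lun.name, "rb") as dev:
    dev.seek(3 * BLOCK)               # block-level: offset maps to bytes
    block3 = dev.read(BLOCK)

print(len(file_data), len(block3))
os.unlink(share.name)
os.unlink(lun.name)
```

The key design consequence: with NAS the filer manages the filesystem for many clients, while with SAN each host manages its own filesystem over shared block storage.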

So how do you choose among DAS, NAS, and SAN architectures for Storage Consolidation? Once a particular approach has been decided, how do you choose which vendor solutions to consider? A number of factors go into making a qualified decision, including near- and long-term requirements, type of environment, data structures, and budget, to name a few. PTS approaches Storage Consolidation by leveraging our proven consulting approach:
  • to gather information on client needs,
  • survey the current storage approach, and
  • assess future requirements against their needs and the current approach.

Critical areas for review and analysis include:
  • Ease of current data storage management
  • Time spent modifying disk space size at the server level
  • Storage capacity requirements to meet long term needs
  • Recoverability expectations in terms of Recovery Time Objectives and Recovery Point Objectives
  • Needed structuring of near- and off-line storage for survivability and ease of access to data
  • Security needed to maintain data storage integrity
  • Evolving storage complexity if current architecture is maintained
  • New applications considered for deployment
  • Requirement to provide Windows clustering
  • Interest in considering Thin Provisioning
  • Storage spending as a percentage of total IT budget
PTS reviews all of the items above, and more. We then design the best storage architecture for both near- and long-term requirements, and we are able to source, install, and manage leading-edge storage solutions from companies such as Dell and Hitachi.
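The Recovery Time Objective and Recovery Point Objective items in the checklist above can be made concrete with a little arithmetic. The figures below are hypothetical; the point is that RPO bounds how much data you can afford to lose, while RTO bounds how long recovery may take.

```python
from datetime import timedelta

backup_interval = timedelta(hours=4)     # e.g. a 4-hour incremental schedule
worst_case_data_loss = backup_interval   # failure just before the next run

rpo_target = timedelta(hours=6)          # tolerate up to 6 h of lost data
rto_target = timedelta(hours=2)          # must be back online within 2 h
measured_restore = timedelta(hours=3)    # what a test restore actually took

print("RPO met:", worst_case_data_loss <= rpo_target)  # True
print("RTO met:", measured_restore <= rto_target)      # False: gap to close
```

A failed RTO check like this is exactly the kind of finding that drives a move from tape restores toward consolidated, disk-based storage.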

Ultimately, Storage Consolidation positively impacts the costs associated with managing your IT network in terms of redundancy, disaster recovery, and network management. It also allows for a more secure network, free from wasted assets tied to particular servers or data center components. Finally, the tasks of provisioning, monitoring, reporting, and delivering the right storage service levels can be time consuming and complex; Storage Consolidation will enhance your ability to manage your organization's data storage.

Wednesday, July 28, 2010

Storage & Data Deduplication

PTS continues to build upon our Storage & Data Protection Consulting Services, aimed at providing the right storage solution for our clients' needs. The IT side of this month's Solutions Showcase therefore provides an overview of our newest storage and data deduplication solutions.

Compellent Fluid Data Storage solutions and ExaGrid Systems Disk-Based Backup with Deduplication solutions add depth to the PTS storage portfolio.

Compellent Fluid Data Storage

With a powerful data movement engine, intelligent software applications, and an open, agile hardware platform, Fluid Data storage is an enterprise-class solution which actively and intelligently manages data at a more granular level to cut cost, time, and risk.

Fluid Data storage dynamically moves enterprise data to the optimal tier based on actual use. The most active blocks reside on high-performance SSD, Fibre Channel, or SAS drives, while infrequently accessed data migrates to lower-cost, high-capacity SAS or SATA drives. The result is network storage that's always in tune with application needs, plus overall storage costs cut by up to 80%.
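The general technique behind this kind of block-level tiering can be sketched simply: track how often each block is read and place hot blocks on fast media. The tier names and thresholds below are invented for illustration; they are not Compellent's actual policy.

```python
TIERS = ("SSD", "FC/SAS 15K", "SAS/SATA 7.2K")  # fast/costly -> slow/cheap

def assign_tier(reads_last_week: int) -> str:
    """Place a block on a tier by recent read activity (thresholds invented)."""
    if reads_last_week >= 100:
        return TIERS[0]   # hot block: keep on SSD
    if reads_last_week >= 10:
        return TIERS[1]   # warm block: fast spinning disk
    return TIERS[2]       # cold block: cheap, high-capacity disk

# Hypothetical per-block read counts collected over a week
access = {"blk_a": 450, "blk_b": 37, "blk_c": 2}
placement = {blk: assign_tier(n) for blk, n in access.items()}
print(placement)
```

Running the placement periodically, rather than once, is what lets data migrate down as it cools, which is where the cost savings come from.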

The Fluid Data storage intelligent software applications enable enterprises of all sizes to move beyond simply storing data to actively, intelligently managing data. Powerful network storage software with built-in intelligence and automation optimizes the provisioning, placement, and protection of data throughout its lifecycle.

Unlike network storage systems that require organizations to rip and replace hardware as business needs change, Compellent storage uses standards-based hardware and supports new technologies on a single, modular platform. Users can mix and match drive technologies such as SSD, FC, SAS, and SATA, and utilize a range of interconnects, from FC to FCoE and iSCSI to 10GbE. Plus, fully redundant hardware components and advanced failover features ensure no single point of failure for high enterprise data availability.

To learn more about Fluid Data Storage, Click Here.

ExaGrid Systems Disk-Based Backup with Deduplication

ExaGrid's EX Series disk-based backup with deduplication revolutionizes how organizations back up and protect their data. By leveraging your current backup application and replacing tape in your nightly backup process, ExaGrid's simple, turnkey appliance can:
  • Reduce the disk space required by at least 10:1, and up to 50:1
  • Shorten your backup window by 30-90%, ensuring all of your data is fully protected
  • Improve your disaster recovery plan through off-site disk-based retention of your data
  • Reduce the amount of time your IT staff spends on managing backups
  • Scale easily and cost-effectively with your data growth
  • Fully protect your virtualized environment
  • Reduce other costs associated with tape-based backup
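The quoted 10:1 to 50:1 ratios translate directly into disk savings, as a quick back-of-envelope calculation shows (the 100 TB workload is a hypothetical figure):

```python
def disk_needed(logical_tb: float, dedup_ratio: float) -> float:
    """Physical TB required to hold logical_tb of backup data."""
    return logical_tb / dedup_ratio

weekly_backups_tb = 100.0
print(disk_needed(weekly_backups_tb, 10))  # 10.0 TB at a 10:1 ratio
print(disk_needed(weekly_backups_tb, 50))  # 2.0 TB at a 50:1 ratio
```

The ratio achieved in practice depends heavily on data type and retention schedule, which is why vendors quote a range rather than a single number.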

The six core ExaGrid disk-based backup appliances include GRID computing software which allows them to virtualize into one another when plugged into a switch. As a result, any of the six appliance models can be mixed and matched into a single GRID system with full backup capacities up to 100TB (6 PB logical). Once virtualized, they appear as a single pool of long-term capacity. Capacity load balancing of all data across servers is automatic, and multiple GRID systems can be combined for additional capacity. Even though data is load-balanced, deduplication occurs across the systems, so load balancing and data migration do not reduce deduplication effectiveness.
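A minimal sketch of the capacity load-balancing idea: each new backup is placed on the node with the most free space, so no single appliance fills up while others sit idle. The placement policy here is invented for illustration; it is not ExaGrid's actual algorithm.

```python
def place_backup(nodes: dict, backup_tb: float) -> str:
    """Pick the node with the most free capacity and record the write."""
    target = max(nodes, key=lambda n: nodes[n]["cap"] - nodes[n]["used"])
    nodes[target]["used"] += backup_tb
    return target

# Hypothetical three-node GRID (capacities and usage in TB)
grid = {
    "ex1": {"cap": 20.0, "used": 18.0},   # 2 TB free
    "ex2": {"cap": 20.0, "used": 5.0},    # 15 TB free
    "ex3": {"cap": 10.0, "used": 1.0},    # 9 TB free
}
chosen = place_backup(grid, 3.0)
print(chosen)  # ex2
```

Because placement looks at free capacity rather than node identity, adding another appliance to the pool immediately participates in balancing with no manual migration.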

ExaGrid's unique approach to scalability provides the following benefits:
  • Performance is maintained as your data grows - each additional ExaGrid server added to a system provides disk, processor, memory and GigE
  • Plug and play expansion - adding an additional ExaGrid server is as simple as plugging it in and letting ExaGrid's automatic virtualized GRID software do the rest
  • Cost-Effective and Flexible Solution with No "Forklift" Upgrades - no need to over-buy storage capacity up front - modular systems are easily combined in a virtualized GRID to smoothly scale up for larger capacities as needed with no painful "forklift" upgrades.
  • Capacity utilization is load-balanced across servers - as a single server reaches full utilization, it can leverage space available on other servers in the GRID

To learn more about the ExaGrid EX Series, Click Here.

Wednesday, July 14, 2010

PTS Data Center Solutions Expands its IT Solutions Portfolio with Compellent Fluid Data Storage and ExaGrid Systems Disk-Based Backup Solutions

PTS Data Center Solutions has launched a strategic distribution relationship with Compellent Technologies and ExaGrid Systems. The relationship includes the full breadth of products from both manufacturers targeted for midsized enterprises.

As a data center consulting and turn-key solutions provider, PTS provides integrated data center facility and IT technical expertise for clients. With a proven process for understanding and addressing client needs, as well as integrated facilities and IT experience & expertise, PTS has a unique vantage point for executing data center, computer room, and network operations center projects. PTS understands the impact IT architecture and network design approaches have on the underlying facility layer in terms of power, cooling, and space considerations and seeks out best-of-breed IT solutions that reduce facility load requirements.

"PTS is often approached by clients requesting support to improve data center efficiencies through energy efficiency analysis, computational fluid dynamic modeling, and virtualization assessments. By expanding our portfolio of storage, backup, and data deduplication solutions with two leading providers in Compellent and ExaGrid Systems, we are providing leading edge solutions with proven track records. Compellent's block-level storage management offers a more granular approach to automatically and actively manage data resulting in reduced data center costs, footprint, and energy consumption. ExaGrid System's scalable disk-based backup solutions reduce the total amount of disk space needed through backup compression and deduplication. Together with PTS' consulting services, we are able to significantly reduce overall data center operational expenses," said PTS President, Peter Sacco.

Compellent's Fluid Data architecture enables superior utilization and efficiency while its unified storage with zNAS offers a single user interface to streamline management of heterogeneous Unix, Linux and Windows file and block data. The Fluid Data architecture increases storage efficiency and utilization by automatically tiering file storage at the block-level, intelligently thin provisioning storage for unstructured data, and delivering rapid data recovery and thin replication. Integrated SAN and NAS management simplifies planning, provisioning and recovery of virtual servers in VMware, Microsoft, Citrix, and Oracle environments.

The ExaGrid Disk-based Backup System is a turnkey, plug-and-play solution that works with existing backup applications and enables faster and more reliable backups and restores. Customers report that backup time is reduced by 30 to 90 percent over traditional tape backup. ExaGrid's patented byte-level data deduplication technology and most recent backup compression, coupled with high-quality SATA storage, reduces the amount of disk space needed by a range of 10:1 to as high as 50:1, or more, resulting in a price that's often less than traditional tape-based backup.

About PTS Data Center Solutions
Experts for Your Always Available Data Center


PTS Data Center Solutions specializes in the business strategy, planning, designing, engineering, constructing, commissioning, implementing, maintaining, and managing of data center and computer room environments from both the facility and IT perspectives.

Founded in 1998, PTS is a consulting, design/engineering, and construction firm providing turnkey solutions, and offering a broad range of data center, computer room, and technical space project experience. PTS employs industry best practices in integrating proven, ‘best-of-breed’, critical infrastructure technologies that result in always available, scalable, redundant, fault-tolerant, manageable, and maintainable mission critical environments.

In every engagement, PTS applies a disciplined, consultative approach to systematically survey and assess the situation and then develop effective plans for seizing opportunities and overcoming obstacles. And, PTS offers a full complement of services—from business strategy and planning to facilities engineering to IT design and implementation—to help transform those plans into reality.

For more information and news, visit the PTS website at www.PTSdcs.com.

About Compellent

Compellent Technologies (NYSE: CML) provides Fluid Data storage that automates the movement and management of data at a granular level, enabling organizations to constantly adapt to change, slash costs and secure information against downtime and disaster. This patented, built-in storage intelligence easily delivers significant efficiency, scalability and flexibility. With an all-channel sales network in 35 countries, Compellent is one of the fastest growing enterprise storage companies in the world.

For more information, visit the Compellent website at www.compellent.com.

About ExaGrid Systems

ExaGrid Systems offers the only disk-based backup appliance with data deduplication purpose-built for backup that leverages a unique architecture optimized for performance, scalability and price. The combination of post-process deduplication, most recent backup cache, and GRID scalability enables IT departments to achieve the shortest backup window and the fastest, most reliable restores, tape copy, and disaster recovery without performance degradation or forklift upgrades as data grows. With offices and distribution worldwide, ExaGrid has more than 2,400 systems installed at 600 customers, and more than 170 published customer success stories.

For more information, visit the ExaGrid website at www.exagrid.com.

# # #

Contact Information:

Larry Davis
PTS Data Center Solutions
201-337-3833 ext. 123
ldavis@ptsdcs.com

Liem Nguyen
Compellent Technologies
952-294-2851
liem.nguyen@compellent.com

Bill Hobbib
ExaGrid Systems
508-898-2872 ext. 286
bhobbib@exagrid.com

Saturday, January 29, 2011

To COLO or Not To COLO Part II

There are many valid reasons to COLO, or outsource part of your data processing and storage requirements, but we are finding many misconceptions about the cost benefits, and cost is typically the determining factor even when there are no real savings. What is often overlooked when evaluating data center strategy options (owning and operating a data center versus COLO space) is that even if you outsource processing and data storage, you cannot outsource the need for a local network and facility support infrastructure. You still need an environmentally controlled data center with conditioned, backed-up power to support your local network, WAN connectivity, security, and phone systems. You can never outsource the entire mission critical facilities infrastructure, can you?

For a true comparison, we need to weigh hosted space for processing and data storage, plus a small owned data center for the local network, facility, and safety equipment, plus power and bandwidth costs for both the local and COLO spaces, against owning a single data center that accommodates the processing, data storage, network, facility, and safety equipment, with its operating costs. The COLO option reduces CAPEX by avoiding an expansion of the mission critical facility to host additional servers and data storage, but building with modular, scalable data center solutions can accomplish that same goal with financing, plus the added bonus of tax depreciation. Where power costs exceed $0.20 per kWh, hosting becomes more attractive for processing and data storage, but it would still be less costly to relocate that workload to an area with lower utility costs and continue to own, since hosting facilities always mark up at least one facet of space, power, bandwidth, or support. While COLO has a lower initial CAPEX, its higher OPEX ensures the COLO model will be more expensive in the long run. So if COLO isn't less expensive in the long run, why are COLO facilities popping up like rabbits in springtime?
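The CAPEX-versus-OPEX argument above reduces to a simple cumulative-cost model. Every figure below is hypothetical; plug in your own quotes. Owning trades a large upfront build cost for lower monthly spend, and COLO does the opposite, so the question is where the lines cross.

```python
def cumulative_cost(capex: float, monthly_opex: float, months: int) -> float:
    """Total spend after a given number of months."""
    return capex + monthly_opex * months

# Hypothetical quotes (USD): build-out vs. colocation for the same footprint
OWN  = {"capex": 2_000_000, "monthly_opex": 25_000}  # build + power/staff
COLO = {"capex": 150_000,  "monthly_opex": 60_000}   # setup + space/power markup

for months in (12, 60, 120):
    own  = cumulative_cost(OWN["capex"],  OWN["monthly_opex"],  months)
    colo = cumulative_cost(COLO["capex"], COLO["monthly_opex"], months)
    print(months, "months:", "own" if own < colo else "colo", "is cheaper")
```

With these inputs COLO wins at one year but ownership wins by year five, which is the long-run pattern the argument above describes.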

The three real reasons to host, and the cause of the COLO boom, are:
1.) We can't keep up with expansion demand: we're going to run out of space, power, or cooling for our processing and data storage before we can alter our facilities to accommodate the growth.
2.) We don't have the internal expertise to effectively plan, build, manage, and operate our own data centers to the availability requirements of our businesses. To expand on this one: many organizations haven't effectively planned, designed, or engineered their data centers in the past, so they got only three years out of their ten-year data center plan. They built structured cabling or power infrastructure to meet only today's bandwidth and power needs, so their data center quickly became outdated. For organizations like these, data centers were a bad investment. Perhaps they should improve their decision making in this area, or rely more on effective consulting engineers.
3.) We don't want to be in the business of owning and operating a data center, and want to focus our attention on our core business. Be careful with this one: I've yet to see an organization operate a facility without a network, security system, and phone system, all of which require a small data center. Of course, we can outsource the operation and maintenance of a small data center, but not the responsibility.

If we are doing an effective job with management and decision making, it will always be less expensive in the long run to own and operate our data centers. Stakeholders and decision makers should also be careful about dreaming of getting out of the data center business, because nowadays it is the business. COLO facilities don't relieve us of the responsibility for effectively protecting and managing our mission critical assets. They can only offload the data processing and storage components, delivering what might be unobtainable in our existing facilities, or difficult to obtain in time given an aggressive IT expansion. Yes, there are numerous ways to shed some of the responsibility, and hosting effectively sheds some of the processing, data storage, and DR responsibilities, but we will never get away from the ultimate responsibility for our data center.

Wednesday, March 23, 2016

PTS Joins the Hedvig CloudScale Partner Program

Oakland, NJ - 03/23/2016 - PTS Data Center Solutions, Inc. announced that it has joined the Hedvig CloudScale Partner Program. As a member of the Hedvig CloudScale Partner Program, PTS Data Center Solutions is able to craft a simple, cost-effective storage solution for speedy implementation in your enterprise.

"The Hedvig partner program is designed to deliver new value to customers by enabling the development of a cloud-like storage infrastructure that takes advantage of emerging trends around commodity infrastructure, hyperconvergence, flash, and hybrid cloud. The Hedvig Distributed Storage Platform provides our partners with a new, modern data center solution, helping customers move faster and compete better," said Phil Williams, VP of Business Development & Channels at Hedvig. "We welcome PTS Data Center Solutions to the growing ecosystem of Hedvig partners advancing cloud and software-defined storage (SDS) practices."

"The Hedvig CloudScale Partner Program will allow PTS to enhance our practice around the software-defined data center (SDDC) and cloud space. Hedvig's program provides us with a range of benefits including training, reduced implementation costs, discounts, and access to Hedvig technology alliances. As a result, we will have a differentiated storage offering that helps our customers achieve greater speed with less risk than legacy storage - all by using a single, streamlined platform that supports any application in any server, VM, or container environment," said Pete Sacco, President and Founder of PTS.



To learn more about Hedvig:
Visit: Hedvig Distributed Storage Platform
Download: Hedvig Brochure

Tuesday, August 02, 2016

Solve Data Challenges Like the Web Giants with this Free Ebook

Free Ebook: Hyperscale Storage for Dummies

This book, compliments of Hedvig and PTS, explores the hype around hyperconverged and hyperscale storage, explaining both storage technologies and architectures to help you understand the similarities and differences of the two approaches. Inside you'll find:
  • How hyperscale IT evolved and why it’s needed for a modern enterprise
  • Ten reasons hyperscale storage reduces costs and improves business alignment for enterprises
  • A scorecard to help determine the right storage strategy for your enterprise
We’ll help you determine the best solution to support your organization and examine virtualization, OpenStack, Docker, and backup use cases. You’ll be ready to make the business case, whether it’s hyperscale, hyperconverged, or a hybrid mix of both storage strategies for your enterprise.

Friday, August 01, 2014

Tintri: Smart Storage that Sees, Learns and Adapts


Our clients often come to realize that the performance demands of their virtualized databases, applications, and VDI are not being met by their current IT infrastructure technology. In our experience, we find the culprit to be the storage technology itself.


Tintri is a near zero-management NFS storage solution built specifically to manage virtual environments for servers, desktops, test/development, and production workloads. This means no LUNs, volumes, or tiers to manage. Tintri's file system is built from the ground up for virtualization, eliminating many of the complexities and inefficiencies traditional storage brings to the table.

PTS has partnered with Tintri to make available a no-cost, no-obligation proof-of-concept (POC) array to test for yourself. Please contact PTS today to see if you qualify: (201) 337-3833, or email us at: TintriPOC@PTSdcs.com

Wednesday, April 08, 2015

PTS Virtual Desktop Infrastructure (VDI) Solutions with Tintri and VMware

VDI, or Virtual Desktop Infrastructure, is a key initiative for many organizations looking to reduce administrative overhead while providing a more secure, flexible and reliable desktop computing environment for end users. Proper planning and good decision making are required to ensure a successful deployment. Choosing the right virtualization platform to host the virtual desktop implementation is often the first major decision and can make or break the entire transformation.


VDI Solutions with Tintri and VMware


The key to success of any VDI initiative is choosing the right virtualization software. VMware Horizon™ Suite, and more specifically VMware Horizon View™, delivers a personalized high fidelity experience for end users across all sessions and devices. It enables higher availability and agility of desktop services unmatched by traditional PCs, while reducing the total cost of desktop ownership by up to 50%. VMware Horizon View end users achieve higher levels of productivity and the freedom to access desktops from more devices and locations.

Tintri Zero Management Storage™ was designed to address the mismatch between traditional storage solutions and the demands of virtualized environments. Built on the industry’s only intelligent VM-aware storage architecture, Tintri VMstore has the intelligence to deliver unparalleled performance and efficiency, and end-to-end insights into the storage infrastructure for unmatched VM control. With the high-performing, validated VMware® Horizon View™ and Tintri reference architecture, you can:
  • Lower risk and cost-effectively deploy and scale desktop virtualization with as many as 1,000 end-users per Tintri VMstore
  • Deliver a high performing and consistent end-user experience with Ultrabook levels of performance
  • Keep end users connected and productive from a wide variety of clients with affordable easy to manage continuity
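The "up to 1,000 end-users per Tintri VMstore" figure quoted above gives a simple sizing rule of thumb. The 20% headroom factor below is our own assumption, added to absorb boot storms, not a vendor specification.

```python
import math

def vmstores_needed(users: int, per_array: int = 1000,
                    headroom: float = 0.8) -> int:
    """Arrays required for a VDI rollout, reserving headroom for boot storms."""
    effective_capacity = per_array * headroom   # usable desktops per array
    return math.ceil(users / effective_capacity)

print(vmstores_needed(750))    # 1 array covers 750 desktops
print(vmstores_needed(2500))   # 4 arrays at 800 effective desktops each
```

Sizing against effective rather than rated capacity is the usual way to keep the end-user experience consistent during peak login periods.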
VDI Deployment
PTS can deploy VDI from within PTS’ Cloud Data Center or from a hosted deployment. Contact PTS with your VDI needs, and we'll tailor a solution to match your requirements in your preferred environment.

Friday, October 17, 2014

Taneja Group Field Report: Massively Scalable, Intrinsically Simple: Tintri’s Low TCO For The Virtualized Data Center

Fast-growing virtualized environments present a thorny storage challenge to IT. Whether the workload is mission-critical applications with demanding SLAs, VDI rollouts with boot storms, or a private cloud for large dev and test environments, delivering virtualized environments and cloud deployments on traditional storage can stall or break a virtualization project.

Flash technology is certainly part of the solution to performance challenges posed by virtualized workloads, but can be prohibitively expensive to broadly implement across the environment. Although flash can be deployed in a number of targeted ways and placed in the infrastructure, the more it is tied down to specific hosts and workloads, the less benefit it provides to the overall production environment. This in turn causes more management overhead.

Taneja Group ran Tintri VMstore storage through their hands-on validation lab and documented large factors of improvement over traditional storage. Those factors accrue through Tintri's cost-effective acquisition; simplicity and ease of deployment and data migration; effective high performance and availability; and smooth expansion over time. This Field Report validates those lab findings: Tintri's approach provides significantly lower TCO than traditional storage solutions.


Friday, September 19, 2014

Tintri Provides Competitive Edge for
PTS Data Center Solutions’ Hosted Cloud Service

Tintri Multi-Tenancy Storage Delivers High Performance, Management Simplicity, and Comprehensive Visibility for VDI and DR-as-a-Service Offerings

"Tintri’s approach to VM- and application-aware storage is a game changer. It’s giving me a distinct advantage over all of the other cloud providers, and there are a myriad of them out there. With Tintri, we will be one of the first service providers with the ability to do multi-tenancy for VDI and DR, with a virtually optimized, high-IOPS array."
Pete Sacco
President and Founder
PTS Data Center Solutions

Key Challenges
  • Experiencing inadequate performance in hosted desktop environment
  • Existing storage platform was difficult to manage
  • Lacked visibility into storage arrays at the VM level for troubleshooting and optimization 
Primary Use Case
  • Cloud-based, hosted desktop virtualization and Disaster Recovery-as-a-service.
Virtualization environment
  • VMware® vSphere™ 5.5
  • VMware Horizon 6.0
  • Traditional storage: Dell EqualLogic and NetApp FAS arrays
VM profile
  • SQL Server 2008
  • Application Servers 2008r2
    - Veeam Backup
    - SharePoint
  • Web Servers 2008r2
  • Active Directory
  • Virtual Desktop Server 2012 with MS Windows v8.1 Desktops (POC)
Tintri Solution
  • Tintri VMstore™ T650
Business Benefits
  • Obtained higher performance for VDI and DR
  • Increased management simplicity and visibility
  • Provided multi-tenancy, enabling PTS to offer cost-effective DR-as-a-service

Monday, April 13, 2009

Going Green with Data Center Storage

Just saw an interesting article in Enterprise Strategies about the use of magnetic tape as an energy-efficient storage solution. In “Tape’s Role in the Green Data Center,” Mark Ferelli discusses how tape technology is making a comeback by helping to keep the data center green as utility bills rise. He explains:

The efficient use of disk can help with data center greening when a user reads and writes to the densest possible disk array to ensure capacity is maximized and more disk is not bought unnecessarily.

In archiving, on the other hand, the greenest option is tape, which uses less power and produces less heat. This not only eases the bite of the utility bill but places less strain on HVAC systems. In contrast, the case can be made that using disk for archiving does more harm, since disks that spin constantly use much more power and generate more heat.
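The power argument in the quote above is easy to put into numbers. The wattage and utility-rate figures below are hypothetical order-of-magnitude estimates, not vendor specifications; the point is that a shelf of constantly spinning disks draws power all year, while idle tape cartridges draw essentially none.

```python
def annual_power_cost(watts: float, rate_per_kwh: float = 0.12) -> float:
    """Yearly cost of a load drawing `watts` continuously at a given rate."""
    return watts / 1000 * 24 * 365 * rate_per_kwh

spinning_disk_shelf = 400.0   # W, drawn 24x7 by an always-on archive shelf
tape_library_idle   = 50.0    # W, library electronics; idle cartridges draw ~0

print(round(annual_power_cost(spinning_disk_shelf), 2))  # 420.48
print(round(annual_power_cost(tape_library_idle), 2))    # 52.56
```

The gap widens further once HVAC is included, since every watt of IT load must also be cooled.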


Ferelli also takes a look at alternative power and cooling solutions, such as MAID (Massive Array of Idle Disks) storage arrays, in comparison with tape-based storage.

What’s been your experience with energy-efficient storage technology? Do tape-based systems offer better power savings versus disk-based solutions?

Wednesday, June 15, 2016

AFCOM Midlantic Chapter Meeting: June 29, 2016, Claymont, DE


Please join Andrew Graham of PTS at AFCOM for their 2nd chapter meeting of 2016.

Agenda (*Subject to Change)
9:30 AM:  Presentation: J. David Winder, Suppression Systems Inc.
SigniFire Video Detection Imaging – this camera detects fire and smoke faster than any of the traditional detection methods. Where all of the standard devices require the smoke or flame to come to them, SigniFire "goes to the fire." In tests against other devices, the SigniFire went into alarm first 88% of the time. This product can be used in support areas of data center facilities; electrical, switch gear, UPS, and generator rooms are ideal for these video detectors. The technology can also be used with most existing cameras in the facility through a custom server.

10:30 AM:  Break

10:45 AM:
  Presentation, Daniel Kane, Hedvig, Inc.,  The Impact of Hyper-Converged, Hyper-Scaled and Software Defined Storage on Data Center Services
Current approaches to storage are broken and can’t keep pace with the rate of change in today’s modern business. Businesses generate, analyze, use, and store more data than ever before. As a result, IT can’t even accurately predict data storage requirements three months out, never mind the three to five years that typify storage refreshes. A new approach to data center infrastructure is needed to help companies dynamically adapt to rapidly changing business requirements.

11:45 AM:
  Lunch, Networking and Complimentary Wine Tasting!


Where: Total Wine & More, 691 Naamans Road, Claymont, DE 19703
Driving Directions Here
 

When: Wednesday, June 29, 2016, from 9:00am – 1:00pm

Friday, December 12, 2014

Zerto Disaster Recovery / Business Continuity Software for Virtualized Data Centers and Cloud Environments

Zerto provides enterprise-class disaster recovery (DR) and business continuity (BC) software specifically for virtualized data centers and cloud environments. Zerto’s award winning solution provides enterprises with data replication & recovery designed specifically for virtualized infrastructure and the cloud. Zerto Virtual Replication is the industry’s first hypervisor-based replication solution for tier-one applications, replacing traditional array-based BC/DR solutions that were not built to deal with the virtual paradigm.
  • Zerto Disaster Recovery for a Virtualized World Zerto’s award winning hypervisor-based replication software enables alignment of your Business Continuity & Disaster Recovery (BCDR) plan with your IT strategy. By using hypervisor-based data replication, you can reduce DR complexity and hardware costs and still protect your mission-critical virtualized applications.
     
  • Zerto Replication for VMware Zerto’s replication for VMware enables automated data recovery, failover and failback and lets you select any VM in VMware’s vCenter. It’s that simple – no storage configuration necessary, no agent installation on guest required.
     
  • Zerto Recovery for Hyper-V Zerto Virtual Replication, the industry’s first hypervisor-based replication solution for VMware environments, is now available for Microsoft Hyper-V. Purpose-built for production workloads and deployed on a virtual infrastructure, Zerto Virtual Replication is the only technology that combines near-continuous replication with block-level, application-consistent data protection across hosts and storage.
     
  • Zerto Hypervisor Replication Zerto offers a virtual-aware, software-only, tier-one, enterprise-class replication solution purpose-built for virtual environments. By moving replication up the stack from the storage layer into the hypervisor, Zerto created the first and only replication solution that delivers enterprise-class, virtual replication and BC/DR capabilities for the data center and the cloud.
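To make the "replication in the hypervisor" idea concrete, here is a minimal, purely illustrative sketch (not Zerto's actual implementation; all names are invented): a tap at the hypervisor layer copies each guest write into a journal that is later shipped to a recovery-site replica, independent of the underlying storage array.

```python
# Illustrative model of hypervisor-level replication: guest writes are
# journaled as they pass through the hypervisor, then shipped to a replica.
from dataclasses import dataclass, field


@dataclass
class WriteOp:
    block: int
    data: bytes


@dataclass
class ReplicationTap:
    """Intercepts VM writes at the hypervisor layer and journals them."""
    journal: list = field(default_factory=list)

    def on_guest_write(self, disk: dict, op: WriteOp) -> None:
        disk[op.block] = op.data   # write proceeds to primary storage
        self.journal.append(op)    # and is captured for replication

    def ship(self, replica: dict) -> int:
        """Apply journaled writes to the recovery-site replica."""
        shipped = len(self.journal)
        for op in self.journal:
            replica[op.block] = op.data
        self.journal.clear()
        return shipped


primary, replica = {}, {}
tap = ReplicationTap()
tap.on_guest_write(primary, WriteOp(0, b"boot"))
tap.on_guest_write(primary, WriteOp(7, b"data"))
tap.ship(replica)
assert replica == primary
```

Because the tap sits above the storage layer, the replica stays consistent with the primary regardless of which array or disk type backs either site, which is the point of moving replication "up the stack."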

Wednesday, June 12, 2013

Tips for use of VMware vCenter Site Recovery Manager

There are various ways to deliver Backup & Disaster Recovery for your enterprise. Backup, a prerequisite for Disaster Recovery, includes tape, local disk, remote disk, or some other means of storing your data in case of IT equipment failure or loss. For Disaster Recovery, PTS Data Center Solutions has presented solutions including all-in-one appliances, co-location disaster recovery service providers, and Storage Area Network (SAN) replication. VMware vCenter Site Recovery Manager (SRM) is an excellent approach to consider given:
  • Automated migration and site recovery
  • Integration with your virtualized environment if you already leverage VMware solutions
  • Non-disruptive testing on the site recovery environment
  • Simple recovery plan management 
vCenter SRM requires replication of your server and storage environment to a secondary, disaster recovery site. With the right expertise and experience, however, the control and consistent failover it provides result in a manageable disaster recovery plan. VMware offers a series of technical tips to consider when you are ready to move forward:
  1. Start small with a single application or service before rolling out across your entire enterprise.
  2. Learn and address application dependencies to confirm applications are available at the recovery site for the services that must run there.
  3. Determine the best replication tool (VMware or a third party) for your situation.
  4. Pre-load the recovery environment with data, even if slightly stale, so initial synchronization completes quickly.
  5. Organize data by logical failover groups.
  6. Keep storage replication adapters up to date.
  7. Orchestrate the sequence in which VMs start at the recovery site to prioritize key groups and their dependencies.
  8. Build multiple recovery plans with common protection groups that fail over together.
  9. Keep your VMware software up to date at all times.
  10. Perform frequent recovery plan testing, particularly in advance of any storm warnings.
To learn more, contact PTS or download the VMware vCenter Site Recovery Manager Tech Tip (registration required).
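Tip 7, orchestrating VM start order, can be sketched in a few lines. This is a hypothetical model, not SRM's actual API; the group names and priority numbers are invented for illustration: VMs are bucketed into priority groups, and lower-numbered groups (core infrastructure, then databases) boot before the application tiers that depend on them.

```python
# Hypothetical sketch of recovery-site boot ordering by priority group.
# Lower priority numbers start first; names are illustrative only.
def recovery_boot_order(groups: dict[int, list[str]]) -> list[str]:
    """Return the VM start order implied by the priority groups."""
    order = []
    for priority in sorted(groups):
        order.extend(groups[priority])
    return order


plan = {
    1: ["dns01", "ad01"],     # core infrastructure first
    2: ["sql01"],             # databases next
    3: ["app01", "web01"],    # dependent application tier last
}
print(recovery_boot_order(plan))
# ['dns01', 'ad01', 'sql01', 'app01', 'web01']
```

Grouping by dependency rather than starting every VM at once is what keeps an application tier from coming up before the database it needs.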

Friday, November 09, 2012

NJ Technology Council - Data Center Summit

PTS Data Center Solutions will be a conference sponsor for the 2012 New Jersey Technology Council Data Center Summit. Titled Working in the Clouds, the focus of the event is on the latest trends and innovative technologies driving the emergence of Next Generation Data Centers. There will be two panel discussions and PTS Data Center Solutions Founder & President, Pete Sacco, will be a panelist for the DCIM Challenges and Opportunities panel in the morning. This panel discussion will examine the world of Data Center Infrastructure Management as a catalyst to increase energy efficiency and control underlying data center operating costs.

The afternoon panel, entitled Data Center Options - Deployment Challenges - Solutions, brings IT leaders from different industries together to share their data center experiences from due diligence to deployment. Solutions providers will offer examples of client objectives and services provided. The goal of this panel is to help you sort through identifying your data storage needs and the options and solutions that can help you achieve maximum return. If you are battling an IT deployment or storage problem, PTS can help through our IT Solutions Group: a team of engineering experts including network and systems architects, server and storage engineers, virtualization engineers, and other IT-focused technical staff.

Who should attend this event?
  • C-level executives (CEO / CIO / COO / CFO / CTO)
  • Data Center Facilities Managers and Engineers, IT and Infrastructure Managers, Data Center Managers
  • Directors and Consultants, IT Directors, Infrastructure Directors, IT Consultants
  • Business Analysts, Finance Directors & Managers
When: December 13, 2012, from 8:30 AM to 3:00 PM
Where: Eisenhower Conference Center, Livingston, NJ 07039

Tuesday, June 09, 2009

Drug Companies Put Cloud Computing to the Test

Traditionally characterized as "late adopters" when it comes to their use of information technology (IT), major pharmaceutical companies are now setting their sights on cloud computing.

Rick Mullin at Chemical & Engineering News (C&EN) explores how Pfizer, Eli Lilly & Co., Johnson & Johnson, Genentech and other big drug firms are now starting to push data storage and processing onto the Internet to be managed for them by companies such as Amazon, Google, and Microsoft on computers in undisclosed locations. In the cover story, “The New Computing Pioneers”, Mullin explains:

“The advantages of cloud computing to drug companies include storage of large amounts of data as well as lower cost, faster processing of those data. Users are able to employ almost any type of Web-based computing application. Researchers at the Biotechnology & Bioengineering Center at the Medical College of Wisconsin, for example, recently published a paper on the viability of using Amazon's cloud-computing service for low-cost, scalable proteomics data processing in the Journal of Proteome Research (DOI: 10.1021/pr800970z).”


While the savings in terms of cost and time are significant (particularly in terms of accelerated research), this is still new territory. Data security and a lack of standards for distributed storage and processing are issues when you consider the amount of sensitive data that the pharmaceutical sector must manage. Drug makers are left to decide whether it’s smarter to build the necessary infrastructure in-house or to shift their increasing computing burdens to the cloud.

Monday, January 19, 2009

Data Centers Understaffed and Underutilized?

The following news snippet from SearchStorage.com caught my eye and I couldn’t resist sharing it here:

Symantec Corp.'s State of the Data Center 2008 report paints a picture of understaffed data centers and underutilized storage systems.

The report, based on a survey of 1,600 enterprise data center managers and executives, found storage utilization at 50%. The survey also discovered that staffing remains a crucial issue, with 36% of respondents saying their firms are understaffed. Only 4% say they are overstaffed. Furthermore, 43% state that finding qualified applicants is a problem.

Really interesting numbers, particularly when it comes to staffing issues. With so many layoffs and other cutbacks happening, it’s not surprising that firms feel understaffed. However, with the national unemployment rate reaching 7.2 percent for December, I don’t think finding qualified applicants will be as much of a problem in 2009. As for the underutilization of storage systems, this is a major contributor to high data center costs. If corporate budgets continue to get slashed, I can guarantee that virtualization is going to stay right at the top of most data center managers’ to-do lists for the foreseeable future.

(By the way, if you’re an unemployed techie, you might want to check out this article from CIO.com. Socialtext is offering its social networking tools free to laid-off workers who want to form alumni networks and share job leads.)

Friday, June 08, 2007

Recommended Reading: “The New Data Center”

There’s an interesting piece up at NetworkWorld.com on the leading trends in data center storage, titled “The New Style of Storage.” It touches on a variety of topics including e-discovery, eco-friendly storage technology, and virtualization.

This is part 3 in a six-part series that examines the latest technologies and practices for building “the New Data Center.” Taken together, it’s a bit of a long read, but well worth the time. Check it out when you have a chance.

Be sure to take a look at parts 1 and 2, as well:
Part 1, The New Data Center – Trends, Products & Practices for Next-Gen IT Infrastructure
Part 2, Defending Your Net – Tools and Tactics for Enterprise IT Security

Want to weigh in on what you’ve read? I’d love to hear it. Post your thoughts on the comments page for this entry.

Thursday, April 20, 2017

The Reality of Virtualization


Virtualization is the process of creating a software-based (or virtual) representation of something rather than a physical one, whether applications, servers, storage, or networks. It is the most effective way to reduce IT expenses while boosting efficiency and agility. The majority (over 80%) of business workloads today are virtualized, though small- to medium-sized businesses lag behind at about 40%.

Are you Ready to Virtualize?

The simple answer is that you can't afford not to virtualize. A business that relies solely on physical resources spends significantly more on IT than one that has virtualized. Its IT is also more complex, and thus far less efficient; as a result, it is more likely to suffer business disruptions and lost revenue.

Today's Virtualization Platform Prepares you for Tomorrow

New business and IT initiatives will require the right technology platform. Virtualization allows greater flexibility in handling new opportunities, making deployment and scaling of resources faster, cheaper, and more efficient. Cloud, mobility, big data, social media, and IT consumerization are realities of today's workforce, and businesses must virtualize to keep up.

To learn more about PTS’ Virtualization Consulting Services, contact PTS or visit http://pts-itsg.com/data-center-virtualization-strategies/.

Wednesday, January 02, 2013

Event Follow-up: Is Your Disaster Recovery Approach a Disaster?

PTS Data Center Solutions, in conjunction with Quorum, hosted a particularly relevant event on December 4th. With over 20 industry executives and Backup & Disaster Recovery experts meeting at the Chart House in Weehawken, NJ, PTS and Quorum discussed the need for improved backup and disaster recovery solutions aimed at the Small- to Mid-size business sector.

"The event was originally scheduled for November 7th but we all know what had just taken place the week before - Hurricane Sandy", said Larry Davis, VP, IT Solutions Group for PTS. "If we could have only spread the word earlier and gotten the Quorum solution out to clients without a clear Disaster Recovery plan, the solution really works for a reasonable price."

Developed by Quorum engineers several years ago as a simple-to-deploy and easy-to-use alternative to expensive redundant server, storage, and virtualization platforms, the Quorum solution has been a hit across market sectors including:
  • Schools
  • Banks
  • Financial Services
  • Law Practices
  • Accounting Firms
  • Manufacturers
  • Municipalities
With premises-based appliances, cloud solutions available for offsite recovery, and archive systems for long term storage requirements, the Quorum onQ solution can be deployed rapidly without any other hardware or software needed.


At the event, Quorum engineers provided a live demonstration of a server failure and the One-Click Recovery™ inherent in the onQ solution's design:
  • Current Forever: Each ultra-efficient update is merged into the onQ device, which houses virtual machine recovery nodes: full, current images of client servers and virtual servers.
  • Ready-to-Run: The approach doesn't wait until you need to recover to build your virtual recovery nodes, allowing one-click recovery at any time.
  • Point-in-Time Recovery: Even though changes are merged into the ready-to-run recovery node, you can restore files or an entire system to a prior state. This is a perfect fit for businesses and organizations that must store and recover seven years of data for regulatory purposes.
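The point-in-time idea above can be sketched with a toy model (purely illustrative, not Quorum's implementation; all names are invented): changes are merged into a ready-to-run current image, but a timestamped journal lets you rebuild the system state as of any earlier moment.

```python
# Illustrative model of point-in-time recovery: replay journaled changes
# over a base image, stopping at the requested timestamp.
def restore(base: dict, journal: list[tuple[int, str, str]], as_of: int) -> dict:
    """Rebuild state by replaying (timestamp, key, value) changes up to `as_of`."""
    state = dict(base)
    for ts, key, value in journal:
        if ts <= as_of:
            state[key] = value
    return state


base = {"config": "v1"}
journal = [(10, "config", "v2"), (20, "data", "x"), (30, "config", "corrupted")]

current = restore(base, journal, as_of=30)    # ready-to-run current image
yesterday = restore(base, journal, as_of=20)  # prior state, before the bad change
assert current["config"] == "corrupted"
assert yesterday == {"config": "v2", "data": "x"}
```

Because the journal is retained even after changes are merged forward, a corrupted or accidentally deleted item can be recovered from any earlier point without keeping full copies of every historical image.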
To learn more, visit PTS' website, watch the onQ video on the YouTube Data Center channel, or contact PTS at sales@ptsdcs.com.