Resellers come around to storage virtualization
Storage virtualization has been around as a concept for years, but value-added resellers are finally finding ways to sell it.
"Adoption has been slower than one would have hoped -- we thought it was going to mirror the Fibre Channel market, but people have moved more cautiously," said Paul Gavazzi, director of solutions architecture for Agilysys Inc., a reseller and integrator based in Mayfield Heights, Ohio.
However, he says that caution is gradually giving way to action, an opinion echoed by Datalink Corp., a reseller in Chanhassen, Minn. According to Scott Robinson, Datalink's chief technology officer, its newly minted storage virtualization practice has been up and running for about six months. "Virtualization is a word you have to be careful about because it means different things to different people, but that said, we're definitely seeing some increased market interest," Robinson said. "Customers need that technology as the next step in the evolution of a consolidated storage infrastructure."
Executives at storage value-added resellers (VARs), like Robinson and Gavazzi, say they are installing storage virtualization tools for a number of reasons, from the aforementioned consolidation projects to replication and disaster recovery. Here are a few of the virtualization tales that these folks can tell.
Flexibility and management

Datalink's Robinson cites one customer, a large financial services company, that managed a large number of big storage boxes on a SAN and wanted freedom from proprietary high-end storage devices. The goal was to put in a tiered storage environment that was as free as possible from any one vendor's proprietary environment. "They wanted tiered storage, but they wanted to manage it simply, and they wanted flexibility in their storage choices so that they could hold vendors more accountable," Robinson said. The company also wanted to manage storage more efficiently by consolidating storage across boxes, rather than within each one.
Robinson worked with the company to install Hitachi Data Systems Inc.'s (HDS) Universal Storage Platform (USP), HDS's virtualization architecture. The preexisting high-end storage devices sit behind the USP box, as does the Tier-2 storage the company also installed. "The HDS unit is a single point of connection for both their Tier-1 and Tier-2 storage," according to Robinson. "There's a single management console within USP that manages them both."
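For readers who want a concrete picture of what such a virtualization head does conceptually, here is a rough Python sketch: hosts connect to one point, and virtual LUNs are carved out of whichever back-end tier has room. The class and method names are invented for illustration and are not HDS's actual interfaces.

```python
# Conceptual sketch only; names are invented, not HDS's actual interfaces.

class BackendArray:
    """A physical array sitting behind the virtualization controller."""
    def __init__(self, name, tier, capacity_tb):
        self.name = name
        self.tier = tier
        self.capacity_tb = capacity_tb


class VirtualizationHead:
    """Single point of connection: hosts see virtual LUNs, not arrays."""
    def __init__(self):
        self.arrays = []
        self.virtual_luns = {}  # lun_id -> (backing array name, size in TB)

    def add_array(self, array):
        self.arrays.append(array)

    def provision(self, lun_id, size_tb, tier):
        # Carve a virtual LUN out of any back-end array in the requested tier.
        for array in self.arrays:
            if array.tier == tier and array.capacity_tb >= size_tb:
                array.capacity_tb -= size_tb
                self.virtual_luns[lun_id] = (array.name, size_tb)
                return array.name
        raise RuntimeError(f"no tier-{tier} capacity left for {lun_id}")


head = VirtualizationHead()
head.add_array(BackendArray("legacy-highend", tier=1, capacity_tb=20))
head.add_array(BackendArray("new-midrange", tier=2, capacity_tb=40))
print(head.provision("finance-db", size_tb=5, tier=1))   # lands on Tier 1
print(head.provision("archive-01", size_tb=10, tier=2))  # lands on Tier 2
```

The point of the single head is visible in the sketch: the host asks for capacity and a tier, and never needs to know which vendor's box ends up holding the data.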
Robinson says that the company will save money long term by better utilizing existing storage, as well as by being free to choose storage devices based on the most attractive deal rather than being tied to one vendor. However, he notes, "This is a big investment. USP comes tied to a very large box." Enterprise-class shops might find it worth the price of admission, but even USP doesn't ensure complete vendor flexibility. "The challenge is that they made a commitment to the HDS platform, so you can't say that it frees them completely from vendor lock-in," he added.

Long-term cost control
"Most of our customers are looking for tangible benefits in terms of cost avoidance," Gavazzi said, and storage virtualisation offers the opportunity to bring costs down through better utilisation of storage, as well as the ability to buy cheaper storage for use in a tiered environment. He tells the story of a West Coast teaching hospital that his company recently worked with to install IBM's SAN Volume Controller (SVC) virtualisation software. The hospital was challenged by rapid data growth that strained its storage budget. Moreover, the all-IBM shop traditionally bought high-end systems and used them for every type of application, from mission critical to low end.
The SVC installation saved money on several fronts. The first was through better utilization of existing storage. "They had on the order of 5 terabytes (TB) to 10 TB of storage installed and were using about 60%, which meant that basically 4 TB was locked up behind whatever storage in whatever array," said Adam Adatepe, senior solutions architect at Agilysys. "Using virtualization meant that they could use that previously unavailable storage, which helped them avoid the cost of an additional 4 TB of storage going forward."
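The arithmetic behind that claim is straightforward. A quick sketch, using the round 10 TB figure from Adatepe's example:

```python
# Back-of-the-envelope math behind the utilization claim, using the
# rounded 10 TB case Adatepe describes.
installed_tb = 10
utilization = 0.60

used_tb = installed_tb * utilization     # 6 TB actually holding data
stranded_tb = installed_tb - used_tb     # ~4 TB locked up in scattered arrays

print(f"Used: {used_tb:.0f} TB, stranded: {stranded_tb:.0f} TB")
# Pooling the arrays behind the SVC makes the stranded ~4 TB allocatable,
# deferring the purchase of roughly that much new capacity.
```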
SVC also allowed the hospital to implement an integrated tiered environment, in which less vital data could be migrated off the high-end storage to middle- and low-end storage. The hospital now has the IBM DS4800 on the high end, an older IBM Shark in the midtier and SATA disks on the low end. "We can put Fibre Channel and SATA disks on the same storage server and the SVC will manage it all," Gavazzi said.
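To make the tiering idea concrete, here is an illustrative placement policy in Python. The thresholds and tier labels are invented for the example; in practice, placement rules like these are set by administrators, not baked into SVC.

```python
# Illustrative tiering policy, not IBM's actual SVC logic. Thresholds and
# tier labels are invented for the example.
def assign_tier(days_since_last_access: int) -> str:
    if days_since_last_access <= 7:
        return "tier 1 (DS4800, Fibre Channel)"   # hot, mission-critical data
    elif days_since_last_access <= 90:
        return "tier 2 (Shark)"                   # warm data
    return "tier 3 (SATA)"                        # cold, rarely touched data

for age in (1, 30, 365):
    print(f"{age:>3} days since access -> {assign_tier(age)}")
```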
More affordable disaster recovery
In the wake of last year's disastrous hurricane season, Weston, Fla.-based Fairway Consulting Group found itself installing virtualization technology for a number of companies that had learned the importance of good disaster recovery procedures the hard way. James Price, president of the company, tells the story of one client, which he says is the fourth or fifth largest nonprofit charity in the U.S., with about $900 million in assets. Its primary data center is in Miami, and the company suffered an outage after Hurricane Wilma came through. "This was just after Katrina, and they were in the process of releasing about $35,000 of disaster relief funds," Price said. "Then they were hit with Wilma, and because their own systems were down, they couldn't finish the process."
As part of an overhaul of its disaster recovery process, the company decided to set up a remote data center in California, but its recent investment in a Hewlett-Packard Co. (HP) EVA storage array had left it with minimal funds. "They wanted to double storage, have that be highly available and wanted to be in DR configuration with their remote data center in California," according to Price.
They are installing storage virtualization software from DataCore Software Corp. in the Miami center to create a near real-time replicated environment between the two data centers. "They have a 100 MB dedicated circuit between the two, so even though the centers are nearly 3,000 miles apart, the latency is really nominal," Price said. Moreover, since DataCore's software is hardware neutral, Price will be able to build the remote site on the cheap. "DataCore doesn't require cutting edge hardware," he said. "We're beefing up some old chassis and combining them with SATA enclosures and building a remote site based on that." The Miami data center, meanwhile, has the HP array, as well as a lower-end Fibre Channel-based array.
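The "nominal latency" claim is easy to sanity-check with back-of-the-envelope math on propagation delay alone, assuming light travels at roughly two-thirds of its vacuum speed in fiber. The figures below are illustrative:

```python
# Rough propagation-delay check for a ~3,000-mile fiber path.
# Assumes ~200,000 km/s in glass (about 2/3 of c); illustrative only.
miles = 3000
km = miles * 1.609
km_per_ms = 200  # 200,000 km/s = 200 km per millisecond

one_way_ms = km / km_per_ms
round_trip_ms = 2 * one_way_ms
print(f"One-way: ~{one_way_ms:.0f} ms, round trip: ~{round_trip_ms:.0f} ms")
# Roughly 24 ms one way, 48 ms round trip: too slow for synchronous
# mirroring, but fine for the near real-time (asynchronous) replication
# Price describes.
```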
DataCore's software sits between the array and the storage client: it takes the storage the array provisions, virtualizes it and presents it back out to the client. It also simplifies the replication process. "When I snapshot a virtual LUN [logical unit number], I'm only snapshotting the delta of data actually written. So you don't have to have the same amount of space available remotely -- just the delta difference. And you can leverage low-cost storage to provide that," Price said.
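A minimal sketch of the delta idea Price describes, with invented data structures rather than DataCore's actual internals: the snapshot ships only the blocks changed since the base image, so the remote side needs space for the changes, not the whole LUN.

```python
# Minimal, invented illustration of delta snapshots; not DataCore's code.
class VirtualLUN:
    def __init__(self, size_blocks):
        self.size_blocks = size_blocks
        self.blocks = {}  # block number -> data; sparse, only written blocks

    def write(self, block_no, data):
        self.blocks[block_no] = data

    def snapshot_delta(self, base):
        """Return only blocks that differ from an earlier base snapshot."""
        return {n: d for n, d in self.blocks.items() if base.get(n) != d}


lun = VirtualLUN(size_blocks=1_000_000)
base = dict(lun.blocks)              # snapshot taken at time zero
lun.write(42, b"payroll batch")
lun.write(99, b"donor records")

delta = lun.snapshot_delta(base)
print(f"Ship {len(delta)} changed blocks, not {lun.size_blocks}")
# Only the delta crosses the wire, which is why cheap SATA enclosures
# suffice on the remote end.
```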