"Let me know when dedupe is here," said W. Curtis Preston, vice president of data protection services at GlassHouse Technologies Inc. VTLs from Sepaton Inc., Diligent Technologies Corp. and FalconStor Software Inc. already offer it -- a feature analysts have flagged as missing from NetApp's VTL since its debut. "The rest of the industry is beginning to announce general availability on products that cut disk storage down 25 to 1, not 2 to 1," he said.
For its part, NetApp remained tight-lipped about when it plans to add the feature. "NetApp doesn't comment on product roadmaps," a company spokesperson wrote to SearchStorage.com in an email.
"I think they are being careful to make sure that, as with their hardware compression, their deduplication feature doesn't sacrifice performance," Biggar said. "They want to make sure it will fit the expectations of their high-end enterprise customer base."
With compression enabled, the new VTL700 and VTL1400 models, which replace the original VTL600 and VTL1200 models, deliver sustained write performance of 850 and 1,700 megabytes per second (MBps), respectively, at 2-to-1 compression. A new model added to the line, the VTL300, has a maximum capacity of 53 terabytes (TB) and sustained throughput of 500 MBps with compression.
According to Krish Padmanabhan, general manager of the heterogeneous data protection business unit for NetApp, the 300 is meant to be an "entry-level" model, but it's clearly not for beginners at $99,000 for a base configuration of 10 TB. The 700, meanwhile, starts at $154,000 and the 1400 lists for $238,000. By comparison, the starting price for the VTL600 was $114,000.
The older models also offer compression, but it's software-based, Padmanabhan said, which means VTL performance can take up to a 50% hit with compression turned on.
"Users are always looking for ways to reduce capacity when it comes to disk backup," Biggar said. "There's a positive ripple effect -- the more data is reduced, the more loads on networks and the whole IT infrastructure is improved."
Questions about dedupe remain, Biggar said, but compression "is at least a good first step."
Another point in NetApp's favor, according to analysts, is the way its VTL decompresses data before writing it to physical tape, letting the tape drive apply its own on-chip compression algorithm. That ensures the physical tape remains readable by the tape drive alone. "Physical tapes have got to be usable without the VTL," Preston said.

The throughput performance, which extends to the back end as well, is also good for keeping newer high-speed tape drives' pipelines full, according to Preston. He has warned in the past that the throughput demands of drives like Sun Microsystems Inc.'s 120 MBps T10000 easily outstrip the data stream arriving from backup servers limited by network bandwidth. When a drive has to wait for backup server output to catch up, it can "shoeshine," running the tape back and forth, which can damage both the media and the drive.
Meanwhile, Preston disputed NetApp's claim that it's better integrated with the tape archive than its competitors. "That won't really happen until the backup applications actually add features that recognize disk-based backup, including VTLs," he said.