Storage caching makes a comeback

A storage caching appliance from Gear6 serves files up to 50 times faster than from disk, but it rekindles old arguments over the risks of putting an in-band device in the data path.

Storage caching is making a comeback in the form of a new appliance from startup Gear6 Inc. The device sits in front of a network attached storage (NAS) system and serves cached files from random access memory (RAM) up to 50 times faster than from disk, the company claims.
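The basic idea behind such an appliance can be sketched as a read-through cache: reads are served from memory when possible, and only cache misses reach the slower backing store. The sketch below is purely illustrative; Gear6 has published no implementation details, and the class and names here are hypothetical.

```python
# Illustrative read-through cache sketch (not Gear6's design).
# A dict stands in for the NAS back end; cached reads skip the slow path.

class ReadThroughCache:
    def __init__(self, backing_store):
        self.backing_store = backing_store  # stand-in for the NAS device
        self.cache = {}                     # RAM-resident copies of files
        self.hits = 0
        self.misses = 0

    def read(self, path):
        if path in self.cache:              # fast path: served from memory
            self.hits += 1
            return self.cache[path]
        self.misses += 1                    # slow path: fetch from the NAS
        data = self.backing_store[path]
        self.cache[path] = data             # populate cache for next time
        return data

nas = {"/export/a.dat": b"payload"}
cache = ReadThroughCache(nas)
cache.read("/export/a.dat")      # first read misses and hits the NAS
cache.read("/export/a.dat")      # second read is served from RAM
print(cache.hits, cache.misses)  # → 1 1
```

The claimed speedup comes entirely from the fast path: once a working set is resident in RAM, repeated reads never touch spinning disk.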

Menlo Park, Calif.-based Gear6 is still working through a beta program and has yet to announce a ship date or concrete spec sheets, but according to industry analysts, its product points to growing demand in the industry for faster response times from storage.

"IT departments are trying to get to a real-time representation of information and for this you need speed and an improvement in I/O response times," said John Webster, principal IT director at Illuminata Inc. "Caching has a role to play here."

Brad O'Neill, senior analyst and consultant at the Taneja Group, added that Gear6's Cachefx device, which provides a shared cache across a networked storage back end, is part of a larger trend of virtualization and disaggregation happening up and down the IT stack right now, from servers and server components to networks and storage.

"It's not surprising then that a company has emerged to say: 'Let's offload a shared memory capability and park it in the network to accelerate specific transactional I/O workloads,' " he said.

Gear6 said the first version of its product will scale to terabytes (TB) of capacity and will support only the Network File System (NFS) protocol for NAS devices, with later versions adding block-based protocols. Users will be able to decide which applications "run on an accelerated data path" and which simply see the regular disks, according to Tom Shea, president and CEO of Gear6. The appliance is clustered, meaning that if one caching node goes down, users see a "temporary reduction in the size of the cache" while all other nodes remain up. Gear6 declined to give further product details at this stage.

Webster said he sees another angle that might spur adoption of this technology: as the cost of energy goes up, the power drawn by sending every I/O to spinning disk adds up. "Sending I/O to something that consumes less electricity cycles could become more appealing," he said.

O'Neill noted that the risks with this technology are the same as with most in-band approaches. Users "will be putting a new technology in the data path in front of very expensive storage and production data," and that's risky, he said. This argument brings back memories of the in-band, out-of-band wars in the storage virtualization market.

All of these technologies need to demonstrate high availability and nondisruptive failback to the pre-existing deployment schema. "If the vendor cannot demonstrate that, don't even think about deploying," O'Neill stressed.

From a capital expense perspective, he noted that storage caching technology will probably not be cheap. "Performance caching won't be for everybody, but for larger shops running dynamic I/O profiles through their NAS platforms, this could be a very defensible investment in the long run," he said.

Alternatively, the major storage manufacturers could simply add more cache to their existing systems, making a separate appliance unnecessary. However, "It'll probably cost more to add it there," noted Webster.
