QLogic has announced the availability of its FabricCache QLE10000 series product that provides shared SSD flash cache across linked servers.
The product ships as two linked cards that contain 200GB or 400GB of flash cache, plus cache management software and fibre channel host bus adapter (HBA) functionality.
The QLE10000 series – based on QLogic’s Mt Rainier adapter – allows the flash cache on each server’s card to be pooled across the cluster, so cache capacity is not limited to a single PCIe server flash card.
QLogic also claims the product cuts traffic to and from the main storage repository in the SAN, although it does increase traffic at the edge of the SAN fabric, and the company admits shared server flash takes a 3% to 6% performance hit (measured in transactions per second, according to QLogic materials) compared with dedicated one-card-per-server implementations.
Server-side SSD arose to meet demanding performance needs, such as transactional and database operations.
Typically, a single server flash card would be installed in a physical server to speed operations without data having to endure latency as it traversed the network. A key shortcoming of this approach is that the cached data forms a silo on a single server, which can limit performance when servers are clustered and presents data protection issues should a server go down.
Consequently, suppliers have been working on sharing data between instances of server cache, with efforts such as QLogic’s Mt Rainier and Dell’s Project Hermes.
QLogic’s marketing director Henrik Hansen said the company is responding in particular to the performance needs of enterprise customers with, for example, VMware ESX and SAP server clusters.
Hansen said: “FabricCache allows all caches in the cluster to share the combined cache pool. This means there is no limit from the size of the SSD card and so allows use of smaller cards to accommodate the cache, compared to one card per server.”
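The capacity arithmetic behind Hansen’s point can be sketched as a toy model (the function and figures below are purely illustrative, not QLogic software):

```python
# Toy model of pooled server-side flash cache. With dedicated cards, each
# server is capped at its own card's capacity; with a shared pool, every
# server can draw on the combined capacity of all cards in the cluster.

def pooled_cache_gb(card_sizes_gb):
    """Total cache available to any one server when caches are shared."""
    return sum(card_sizes_gb)

# Hypothetical four-server cluster with one 200GB card per server.
cards = [200, 200, 200, 200]

per_server_limit = max(cards)        # dedicated design: 200GB per server
shared_pool = pooled_cache_gb(cards) # shared design: 800GB for all servers

print(per_server_limit, shared_pool)
```

This is why, as Hansen notes, smaller (and cheaper) cards can be used per server while still presenting a large combined cache to the cluster.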
Currently only fibre channel connectivity is supported, but 10Gbps Ethernet for iSCSI SANs is next on the QLogic roadmap.