Opinion

Direct-attached storage vs SAN: Clustered DAS model gaining favour in virtualised, solid-state world

Server-located direct-attached storage (DAS) delivers faster and simpler drive I/O to applications than a SAN can. That's the pitch being put forward by Oracle and some startups.

Small breezes in the market started by Sun's Honeycomb filer box in 2004 have strengthened with the launch of Nutanix's Complete Cluster and Oracle's Exadata and Exalogic integrated server/storage products.

In a sense these new developments are a reinvention of the DAS model—a sort of clustered DAS—and a threat to the dominance of SAN in the direct-attached storage vs SAN choice. Originally, the SAN arose as a more efficient way of providing storage to many servers than each having its own DAS.

In the classic SAN arrangement, server-based applications get storage I/Os completed in a reliably short time and data is not lost in transmission. The SAN infrastructure is shared by the many servers accessing it, but SAN data is accessed by individual servers. Portions of the SAN, known as logical unit numbers (LUNs), are allocated to specific servers and applications. These applications “think” they are accessing DAS when in fact they are accessing a SAN.
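
To make the LUN idea concrete, here is a minimal Python sketch of LUN masking, with every class and identifier invented for illustration rather than taken from any vendor's API: the array owns the storage, but each server is shown only the LUNs mapped to it, so its applications see what looks like local disk.

```python
# Toy model of classic SAN LUN masking: each server is granted specific
# LUNs and treats them as if they were locally attached disks.
# All names are illustrative; this is not any vendor's actual interface.

class SanArray:
    def __init__(self):
        self.luns = {}       # lun_id -> capacity in GB
        self.masking = {}    # server WWN -> set of lun_ids it may see

    def create_lun(self, lun_id, capacity_gb):
        self.luns[lun_id] = capacity_gb

    def map_lun(self, lun_id, server_wwn):
        """Expose one LUN to one server; other servers never see it."""
        self.masking.setdefault(server_wwn, set()).add(lun_id)

    def visible_luns(self, server_wwn):
        """What the server discovers at boot: it looks just like DAS."""
        return {lid: self.luns[lid]
                for lid in self.masking.get(server_wwn, set())}

array = SanArray()
array.create_lun("lun0", 500)
array.create_lun("lun1", 200)
array.map_lun("lun0", "wwn-server-a")    # database server gets lun0
array.map_lun("lun1", "wwn-server-b")    # mail server gets lun1

print(array.visible_luns("wwn-server-a"))    # {'lun0': 500}
```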

This is not true of network-attached storage (NAS), in which files can be shared among accessing server applications, and here is the point of departure for the reinvented direct-attached model.

For Sun, Honeycomb involved putting NAS disks in the same overall enclosure as multiple servers and using an advanced file system to speed applications up. This idea didn't spread widely outside Sun but did lead, once Sun was bought by Oracle, to the supercharged Exalogic and Exadata boxes.

The classic SAN was created and developed before widespread use of VMware and server virtualisation. As that happened, servers got more cores and more sockets. One physical server with one CPU running Windows became a two-socket, quad-core computer—that is, it had eight cores. These were natural physical resources for virtual machines (VMs), and so our original single-server, single-OS machine became, in effect, eight servers running eight OSes but still in one enclosure.

And so the server/storage I/O gap (the difference between how quickly servers could process data and how quickly networked storage arrays, either NAS or SAN, could serve it) became more marked. Workarounds emerged, such as solid-state drives in the array or flash cache in the servers, or even flash-based or flash-enhanced accelerator boxes sitting in front of the networked storage.
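
A back-of-the-envelope sketch, using assumed ballpark figures rather than any measured numbers, suggests why consolidation made the gap more painful: eight VMs on one host multiply the random I/O demand landing on the same networked spindles.

```python
# Rough arithmetic on the server/storage I/O gap. All figures are
# assumed, order-of-magnitude values for illustration only.

DISK_IO_MS = 8.0      # one random disk I/O, roughly
NET_HOP_MS = 0.5      # added SAN fabric round trip, roughly

vms_per_host = 8      # the two-socket, quad-core example above
iops_per_vm = 200     # modest random I/O demand per VM (assumed)

demand = vms_per_host * iops_per_vm
per_spindle = 1000.0 / (DISK_IO_MS + NET_HOP_MS)  # IOPS per networked disk

print(f"Aggregate demand: {demand} IOPS")
print(f"One networked spindle serves ~{per_spindle:.0f} IOPS")
print(f"Spindles needed: ~{demand / per_spindle:.0f}")
```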

In the light of this, some system architects began to think that networked storage was simply too far away from the applications in the servers. The answer was to bring the servers and the data much closer together.

LeftHand Networks, an iSCSI SAN array supplier, devised the idea of running its software, the iSCSI drive array controller, as a virtual machine on a server and corralling the server's direct-attached drive arrays into a networked, iSCSI-accessed SAN. In this way, a group of servers could have their individual direct-attached storage grouped together into a virtual storage appliance (VSA).
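
A minimal sketch of the VSA idea follows, with invented names and none of the real iSCSI plumbing: controller software on each server contributes that server's DAS to one shared pool, from which network-visible volumes are carved.

```python
# Toy sketch of a virtual storage appliance (VSA): per-server controller
# software pools each host's local disks into one network-visible SAN.
# Purely illustrative; this is not LeftHand/HP P4000 code.

class ServerNode:
    def __init__(self, name, local_disks_gb):
        self.name = name
        self.local_disks_gb = local_disks_gb   # DAS capacity on this host

class VirtualStorageAppliance:
    """Aggregates every node's DAS and carves volumes from the pool."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.allocated_gb = 0

    def pool_capacity_gb(self):
        return sum(sum(n.local_disks_gb) for n in self.nodes)

    def create_volume(self, size_gb):
        if self.allocated_gb + size_gb > self.pool_capacity_gb():
            raise ValueError("pool exhausted")
        self.allocated_gb += size_gb
        # A real VSA would now export this volume as an iSCSI target.
        return f"iqn.example:vol-{self.allocated_gb}"

vsa = VirtualStorageAppliance([
    ServerNode("esx1", [300, 300]),
    ServerNode("esx2", [300, 300]),
])
print(vsa.pool_capacity_gb())    # 1200 GB pooled from two hosts' DAS
print(vsa.create_volume(500))    # a volume carved from the shared pool
```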

The SAN still existed but was now crafted from an individual server's DAS instead of existing as a discrete storage array. It then became possible to do the same thing with some EMC array software, such as the VNX.

Then we saw StoneFly introduce its virtual SAN, which does much the same thing as the LeftHand product.

Fusion-io re-emphasises the virtues of DAS with its PCIe-connected flash and its use of a software layer to represent its ioDrives as a pool of storage for applications in the server. We are also now beginning to see the prospect of multi-level cell (MLC) flash enabling a terabyte-sized solid-state memory tier slower than DRAM but much, much faster than disk. Virtualised, multi-core servers fitted with terabyte-class flash memory do not need SAN or NAS disk.
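
The latency ladder below uses assumed, order-of-magnitude figures, not vendor specifications, to show where such a flash tier sits.

```python
# Rough latency ladder (assumed orders of magnitude, for illustration)
# showing where a terabyte MLC flash tier lands between DRAM and disk.

tiers_ns = {
    "DRAM":       100,         # ~100 ns
    "PCIe flash": 50_000,      # ~50 microseconds
    "Disk":       8_000_000,   # ~8 ms for a random I/O
}

for name, ns in tiers_ns.items():
    print(f"{name:10s} ~{ns:>9,} ns  ({ns // tiers_ns['DRAM']}x DRAM)")
```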

StoneFly and Fusion-io did a deal earlier this year that meant StoneFly's software could virtualise a server's ioDrives into a SAN, combining flash speed and DAS locality. And now along comes Nutanix; it’s Honeycomb reinvented for the block access SAN world, if you like.

Nutanix’s idea is to cluster compute nodes together and have each compute node's storage virtualised into a SAN with a 10 Gigabit Ethernet (GbE) interconnect to link the individual compute node's DAS together. Actually, each compute node runs storage controller software, SOCS (Scale-Out Cluster Storage), as a virtual machine, and the SOCS VMs in each compute node talk and share metadata across this 10 GbE pipe.
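
The toy sketch below, emphatically not Nutanix's actual SOCS code, models the essential idea: writes land on the local node's DAS, and a shared metadata map, standing in for what the controller VMs exchange over the 10 GbE interconnect, lets any node locate any extent.

```python
# Toy model of clustered DAS with a per-node controller VM. The shared
# dict stands in for metadata gossiped between controllers over 10 GbE.
# Illustrative only; this is not Nutanix's SOCS implementation.

class ControllerVM:
    def __init__(self, node_id, cluster_metadata):
        self.node_id = node_id
        self.metadata = cluster_metadata   # extent -> owning node

    def write(self, extent, data, local_store):
        local_store[extent] = data             # land the write on local DAS
        self.metadata[extent] = self.node_id   # publish ownership to cluster

    def read(self, extent, stores):
        owner = self.metadata[extent]
        # If the extent is local this is a fast DAS read; otherwise the
        # lookup models a fetch from the owning node over the 10 GbE link.
        return stores[owner][extent]

metadata = {}                      # stands in for replicated metadata
stores = {1: {}, 2: {}}            # each node's direct-attached storage
c1 = ControllerVM(1, metadata)
c2 = ControllerVM(2, metadata)
c1.write("vm-disk-extent-7", b"blocks", stores[1])
print(c2.read("vm-disk-extent-7", stores))   # node 2 finds it via metadata
```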

Nutanix has also given each compute node a couple of SSDs so we get a combination of DAS locality, flash speed and virtual SAN, with multi-core, multi-socket virtualised servers intended specifically to supercharge VMs. Nutanix customers get an integrated compute/storage box that can be scaled by clustering and does not have any of the Fibre Channel SAN's fabric complexity. Forget running Fibre Channel over Ethernet (FCoE) to a networked block access array, which is the future of Fibre Channel SANs, according to Fibre Channel vendors looking to combine Ethernet economics with Fibre Channel dependability and compatibility. In fact, forget physical SANs altogether and go virtual.

The applications run in virtual servers and read and write data to virtual storage. You can move, says Nutanix, a virtual machine, its application files and its storage as a single entity.

This is a startup speaking, one with a hot box to sell, and we have seen no performance data, but there does seem to be some solid sense here. FCoE depends upon deterministic and lossless Data Centre Ethernet. Virtualised iSCSI-based SANs, such as those from LeftHand (now the HP P4000), StoneFly and Nutanix, will be able to use DCE to remove objections about Ethernet unreliability and say their customers can get Fibre Channel SAN speed and reliability without a Fibre Channel fabric or even the Fibre Channel protocol, with no physical SAN at all.

The same is true for networked filers. Arguably, we are seeing a potential reclamation of the SAN, and of NAS, by virtualised DAS, such that VMs in a set of multi-socket, multi-core servers access a pool of virtualised DAS that looks and acts like a SAN or NAS but is actually just plain old DAS with a virtualisation layer in front of it. This could take off if startups like Nutanix gain traction and if server vendors see the usefulness of this in kicking out third-party storage array vendor cuckoos, like EMC and NetApp, from their customer bases.

Chris Mellor is storage editor with The Register.

This was first published in September 2011