Three ways to create clustered storage
Clustered storage systems run on storage servers, NAS gateways and hosts. Here's how to determine which clustered file-system architecture is best for your needs and storage environment.
- Can the CFS make use of existing storage and network resources?
- How difficult is it to install and configure?
- How does the CFS manage data integrity?
- Can it scale performance and capacity linearly and independently?
- What problem is the CFS best suited to solve?

Terrascale Technologies Inc. also uses storage bricks, but places its TerraGrid Cluster File System (CFS) on the clients accessing the bricks. Terrascale built TerraGrid CFS on the open-source XFS file system; it's a parallel file system that allows applications running in parallel to access the same files simultaneously. TerraGrid CFS scales to support hundreds of nodes, and lets a server read data from, or write data to, any node. However, TerraGrid CFS is available only for Linux servers. Windows and Unix servers that need to access the storage pool have to go through a Linux NAS gateway that runs TerraGrid CFS.
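To picture the access pattern a parallel file system permits, here's a minimal sketch, assuming a POSIX-style shared mount; the /mnt/cluster path, file name and region size are hypothetical, not anything specific to TerraGrid. Several worker processes write disjoint regions of the same file at the same time, which is the kind of concurrent access a parallel CFS is built to coordinate across servers.

```python
# Illustration only: several processes writing disjoint regions of one file
# on a shared parallel-file-system mount (the path below is hypothetical).
import os
from multiprocessing import Process

MOUNT = "/mnt/cluster"                     # hypothetical parallel-FS mount point
SHARED_FILE = os.path.join(MOUNT, "results.dat")
REGION = 1024 * 1024                       # each worker owns a 1 MB byte range

def worker(rank: int) -> None:
    # Each worker opens the same file and writes only its own byte range,
    # so no two writers overlap.
    fd = os.open(SHARED_FILE, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        payload = bytes([rank % 256]) * REGION
        os.pwrite(fd, payload, rank * REGION)   # positional write at this rank's offset
    finally:
        os.close(fd)

if __name__ == "__main__":
    procs = [Process(target=worker, args=(r,)) for r in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

The same code runs against any local file system; the point of a parallel CFS is that those workers can sit on different physical servers and still share the one file safely.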
Panasas Inc.'s ActiveScale Storage Cluster is architected in a manner similar to that of TerraGrid CFS, but it also has some unique characteristics. Like Terrascale, Panasas places agents on all clients accessing its storage, directly supports only Linux servers and allows multiple clients to access back-end storage. But Panasas uses Panasas StorageBlades that hold two 400GB SATA drives each. These drives are virtualized by Panasas DirectorBlades, which stripe the data across the StorageBlades. DirectorBlades cluster together to create one "virtual" NFS and CIFS server that can scale I/O at high performance levels.

But problems with the clustered storage systems architecture may surface as more bricks are added to the cluster. It's the responsibility of the CFS to manage each additional module's processor, cache and storage capacity. Failing to keep the cache coherent across the bricks can result in file corruption; however, keeping the cache coherent among all of the bricks generates a lot of chatter and degrades the overall performance of the cluster.
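As a rough illustration of the striping performed by the DirectorBlades described above, the sketch below deals fixed-size chunks of a byte stream out to a set of storage modules in round-robin order and then reassembles them. The 64KB chunk size, blade count and function names are assumptions made for illustration; Panasas' actual implementation adds virtualization, metadata and protection that this sketch ignores.

```python
# Conceptual sketch of round-robin striping across storage modules.
# Chunk size and blade count are illustrative, not Panasas' implementation.
from typing import Dict, List, Tuple

CHUNK_SIZE = 64 * 1024  # 64 KB stripe unit (assumed)

def stripe(data: bytes, num_blades: int) -> Dict[int, List[Tuple[int, bytes]]]:
    """Deal fixed-size chunks of `data` out to blades in round-robin order.

    Returns a mapping: blade index -> list of (chunk index, chunk bytes).
    """
    layout: Dict[int, List[Tuple[int, bytes]]] = {b: [] for b in range(num_blades)}
    for i in range(0, len(data), CHUNK_SIZE):
        chunk_index = i // CHUNK_SIZE
        blade = chunk_index % num_blades          # round-robin placement
        layout[blade].append((chunk_index, data[i:i + CHUNK_SIZE]))
    return layout

def reassemble(layout: Dict[int, List[Tuple[int, bytes]]]) -> bytes:
    """Rebuild the original byte stream from the per-blade chunk lists."""
    chunks = sorted(
        (idx, piece) for pieces in layout.values() for idx, piece in pieces
    )
    return b"".join(piece for _, piece in chunks)

# Example: stripe 1 MB of data across four blades, then rebuild it.
if __name__ == "__main__":
    original = bytes(1024 * 1024)
    placed = stripe(original, num_blades=4)
    assert reassemble(placed) == original
```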
Isilon deals with this issue by designating two or more of its bricks as "owning" bricks for each specific file. Keeping the cache consistent on only a few bricks eliminates much of the chatter among bricks. If a request for a file is received by a brick other than an owning brick, the CFS redirects the request to the owning brick. Once the owning brick receives the request, it directs the CFS to distribute data writes evenly across all of the storage bricks instead of just the disk drives in the owning bricks.

Isilon's approach meets most application requirements when just a few servers need to access large files sequentially, but the technique falters as the number of servers that need to access data concurrently on multiple bricks grows. In that scenario, the owning bricks wouldn't be able to expeditiously handle all of the redirects coming from the other bricks, and performance would degrade. To avoid this problem, Terrascale's TerraGrid CFS allows any server in the compute cluster to access any data block directly on any brick at any time. This approach eliminates the need for cache coherency among the bricks, and the CFS doesn't have to add any metadata to the file because the file is locked while the server directly accesses its blocks.

But none of these products overcomes the two main problems of CFS platforms. First, although SATA drives are well suited to the sequential data access required by applications with large amounts of digital content (such as audio, video and graphics), in environments with large amounts of random reads and writes their performance is significantly lower than that of higher-performing Fibre Channel (FC) drives. Second, these systems don't let users redeploy storage they already own. If you have installed storage you want to use in a cluster, consider CFS architectures that reside on NAS gateways or client servers.
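To make the owning-brick idea described above concrete, the sketch below derives a small, stable set of owners for each file by hashing its path, so any brick that receives a request can compute where to redirect it without polling its peers. The hash scheme, brick counts and function names are assumptions for illustration only, not Isilon's actual algorithm.

```python
# Illustrative only: deterministic "owning brick" selection by hashing the
# file path, so any brick can redirect a request without asking its peers.
# The hash scheme and brick counts are assumptions, not Isilon's algorithm.
import hashlib
from typing import List

def owning_bricks(path: str, total_bricks: int, owners_per_file: int = 2) -> List[int]:
    """Pick a small, stable set of bricks responsible for a file's cache state."""
    digest = hashlib.sha1(path.encode("utf-8")).digest()
    start = int.from_bytes(digest[:4], "big") % total_bricks
    return [(start + i) % total_bricks for i in range(owners_per_file)]

def handle_request(path: str, received_by: int, total_bricks: int) -> int:
    """Return the brick that should service the request, redirecting if needed."""
    owners = owning_bricks(path, total_bricks)
    if received_by in owners:
        return received_by          # this brick already owns the file's cache
    return owners[0]                # otherwise redirect to an owning brick

# Example: a 16-brick cluster; brick 5 receives a request for a file it doesn't own.
if __name__ == "__main__":
    print(owning_bricks("/projects/video/frame_0001.dpx", total_bricks=16))
    print(handle_request("/projects/video/frame_0001.dpx", received_by=5, total_bricks=16))
```

The payoff of this design is that cache-coherency traffic stays confined to a file's few owners rather than fanning out to every brick in the cluster; the cost, as noted above, is that the owners become a bottleneck when many servers hit the same files concurrently.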