Three ways to create clustered storage

Clustered storage systems run on storage servers, NAS gateways and hosts. Here's how to determine which clustered file-system architecture is best for your needs and storage environment.

Clustered file systems (CFS) offer a practical way to respond to big storage problems such as the proliferation of low-cost servers, application data growth and the need to deliver better application performance. A CFS pulls together and shares the excess storage capacity that's often available but hidden on storage networks. In doing so, a CFS increases storage utilization rates, delivers performance typically found only in high-end arrays and gives users an economical way to scale their architectures.

There are three ways to deploy a CFS: on storage servers, NAS gateways and hosts. Any server in the cluster can access any block of storage managed by the cluster. Most CFS products also integrate the volume manager with the file system, which lets the CFS break large files into blocks called extents and stripe those extents across different storage arrays to improve I/O performance.
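
To make the extent-striping idea concrete, here's a minimal Python sketch of round-robin placement; the 64MB extent size and the round-robin policy are assumptions for illustration, not any vendor's actual layout.

```python
# Hypothetical sketch of extent striping: a clustered file system splits
# a large file into fixed-size extents and distributes them round-robin
# across several storage arrays so sequential I/O fans out in parallel.

EXTENT_SIZE = 64 * 1024 * 1024  # 64MB extents (assumed size)

def stripe_extents(file_size: int, num_arrays: int) -> dict[int, list[int]]:
    """Map each storage array to the extent indexes it holds."""
    placement: dict[int, list[int]] = {a: [] for a in range(num_arrays)}
    num_extents = (file_size + EXTENT_SIZE - 1) // EXTENT_SIZE
    for extent in range(num_extents):
        placement[extent % num_arrays].append(extent)
    return placement

# A 1GB file striped across 4 arrays: each array holds every 4th extent,
# so a sequential read can pull from all 4 arrays at once.
print(stripe_extents(1024 * 1024 * 1024, 4))
```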

There are several key questions that need to be answered before selecting a CFS:

  • Can the CFS make use of existing storage and network resources?
  • How difficult is it to install and configure?
  • How does the CFS manage data integrity?
  • Can it scale performance and capacity linearly and independently?
  • What problem is the CFS best suited to solve?

Clustered storage systems

Clustered storage systems are composed of bricks (servers preconfigured with set amounts of CPU, cache and storage) or blades. Each brick is loaded with the vendor's CFS software, which controls and shares the processing, memory and storage resources of the bricks in the cluster; blades are managed by an external server that runs the CFS.

Isilon Systems Inc.'s IQ storage clusters use storage bricks and its CFS, called OneFS, which combines four layers of storage management software--file system, volume management, data protection and high availability--into one logical file system. This integration allows OneFS to configure storage on any of the up to 88 bricks it supports in a cluster and to create volumes of up to 528TB (88 bricks of 6TB each). Isilon also lets users choose among bricks of different sizes, ranging from 1.9TB to 6TB raw. Each brick holds only 12 serial ATA (SATA) disk drives, but by offering bricks with different drive capacities, Isilon lets users select bricks that match specific application performance requirements.

Terrascale Technologies Inc. also uses storage bricks, but places its TerraGrid Cluster File System (CFS) on the clients that access the bricks. Terrascale built TerraGrid CFS on the open-source XFS file system; it's a parallel file system that lets applications running in parallel access the same files simultaneously. TerraGrid CFS scales to support hundreds of nodes, and lets a server read or write data on any node. However, TerraGrid CFS is available only for Linux servers; Windows and other Unix servers that need to access the storage pool must go through a Linux NAS gateway running TerraGrid CFS.

Panasas Inc.'s ActiveScale Storage Cluster is architected much like TerraGrid CFS, but it has some unique characteristics. Like Terrascale, Panasas places agents on all clients accessing its storage, directly supports only Linux servers and allows multiple clients to access back-end storage. But Panasas uses StorageBlades, each holding two 400GB SATA drives. These drives are virtualized by Panasas DirectorBlades, which stripe data across the StorageBlades. DirectorBlades cluster together to create one "virtual" NFS and CIFS server that can scale I/O to high performance levels.

But problems with the clustered storage system architecture may surface as more bricks are added to the cluster. It's the responsibility of the CFS to manage each additional brick's processor, cache and storage capacity. Failing to keep the cache coherent across the bricks can result in file corruption; yet keeping the cache coherent among all of the bricks generates a lot of chatter and degrades the overall performance of the cluster.
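
A quick sketch shows why that chatter grows with cluster size. Assuming a simple write-invalidate coherency scheme (an assumption for illustration, not any vendor's protocol), every write to a cached file must notify every other brick holding a copy:

```python
# Minimal sketch of coherency chatter under write-invalidate caching:
# each write triggers an invalidation message to every other brick,
# so per-write traffic scales linearly with cluster size.

def invalidations_per_write(num_bricks: int) -> int:
    """Each write must invalidate the copy on every other brick."""
    return num_bricks - 1

for n in (4, 16, 88):
    print(f"{n} bricks -> {invalidations_per_write(n)} invalidations per write")
```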

Isilon deals with this issue by designating two or more of its bricks as "owning" bricks for each specific file. Keeping the cache consistent in only a few bricks eliminates much of the chatter among bricks. If the request for a file is received by a brick other than the owning brick, the CFS redirects the request to the owning brick. Once the owning brick receives the request, it directs the CFS to distribute the data writes evenly across all of the storage bricks instead of just the disk drives in the owning bricks.
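
Here's a minimal sketch of that owning-brick pattern, assuming files are assigned to owners by hashing the path; the hashing scheme and brick counts are illustrative assumptions, not Isilon's documented implementation.

```python
# Hypothetical sketch of the "owning brick" idea: cache state for a file
# is kept coherent only on the bricks that own it; any other brick that
# receives a request redirects it to an owner instead of caching the file.

import hashlib

NUM_BRICKS = 8
OWNERS_PER_FILE = 2  # two or more owning bricks per file

def owning_bricks(path: str) -> list[int]:
    """Deterministically pick the owning bricks for a file."""
    digest = int(hashlib.sha256(path.encode()).hexdigest(), 16)
    return [(digest + i) % NUM_BRICKS for i in range(OWNERS_PER_FILE)]

def handle_request(brick: int, path: str) -> str:
    owners = owning_bricks(path)
    if brick in owners:
        return f"brick {brick} serves {path} from its coherent cache"
    # Non-owners never cache the file, so no coherency traffic is needed;
    # they simply forward the request to an owner.
    return f"brick {brick} redirects {path} to owning brick {owners[0]}"

print(handle_request(3, "/video/master.mov"))
```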

Isilon's approach meets most application requirements when just a few servers need to access large files sequentially, but this technique falters as the number of servers that need to access data concurrently on multiple bricks grows. In that scenario, the owning bricks wouldn't be able to expeditiously handle all of the redirects coming from the other bricks and performance would degrade.

To avoid this problem, Terrascale's TerraGrid CFS allows any server in the compute cluster to directly access any data block on any brick at any time. This approach eliminates the need for cache coherency among the bricks, and the CFS doesn't have to add any metadata to the file, because the file is locked while the server directly accesses its blocks.
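
A minimal sketch of that lock-then-access pattern follows; the in-process lock stands in for a cluster-wide lock service, and the function names are hypothetical, not Terrascale's API.

```python
# Hypothetical sketch of direct block access under a file lock: instead
# of keeping caches coherent, a client takes an exclusive lock on the
# file, reads or writes its blocks directly on any brick, then releases
# the lock. threading.Lock is a stand-in for a distributed lock manager.

import threading

file_locks: dict[str, threading.Lock] = {}

def write_blocks(path: str, blocks: list[bytes]) -> None:
    lock = file_locks.setdefault(path, threading.Lock())
    with lock:  # file is locked for the duration of the direct access
        for i, block in enumerate(blocks):
            # In a real cluster this would be a direct write to whichever
            # brick holds block i; here we just report the action.
            print(f"writing block {i} of {path} directly to its brick")

write_blocks("/data/results.bin", [b"a" * 512, b"b" * 512])
```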

But none of these products overcomes the two main problems of CFS platforms. First, although SATA drives are well suited to the sequential data access required by applications with large amounts of digital content (such as audio, video and graphics), in environments with large amounts of random reads and writes, SATA drive performance falls significantly short of that of higher-performing Fibre Channel (FC) drives. The other problem is that these systems don't let users redeploy storage they already own. If you have existing storage you want to use in a cluster, consider CFS architectures that reside on NAS gateways or client servers.
