Over the last 20 years data has been delivered in enterprise environments using centralised, consolidated storage infrastructures, writes Chris Evans. These have centred on scale-up SAN and NAS products with high resiliency and availability built in, while scaling to terabytes and eventually multi-petabytes of storage.
There's no doubt the scale-up market is still strong, but we are starting to see the emergence of scale-out offerings, where the storage infrastructure is expanded by adding nodes -- typically servers -- in a loosely coupled configuration.
Scale-up storage has worked for many years. It was facilitated by the introduction of storage area networks (including Fibre Channel and iSCSI) that removed many of the physical barriers of co-locating storage and servers together.
However, at current levels of scalability, scale-up has problems, and many administrators would be terrified at the idea of deploying a 4PB storage array. There are obvious issues, such as managing many different and possibly diverse workloads on the same hardware, which, without effective multi-tenancy tools, would continually affect each other.
But there are other, more subtle issues. When one piece of hardware supports many lines of business, it can be difficult to arrange maintenance slots. Hardware failures affect all lines of business, as do software upgrades. Replacing scale-up systems can also be a significant headache and incur substantial costs for storage teams.
Scale-out addresses the need for growth in a different way. Rather than add more storage to a single array, scale-out solutions grow by adding storage nodes, which are typically either servers or storage appliances.
Nodes are typically connected in a loosely coupled configuration; they can fail independently without taking down the whole cluster. But some suppliers have implemented tightly coupled scale-out systems.
Where nodes can act independently, they can also be managed independently -- including rolling upgrades, hardware replacement and scaling. Scale-out systems also better fit the move to hyperscale computing, as demonstrated by the likes of Google and Facebook, and coming to a datacentre near you soon.
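The scale-out principle described above -- grow by adding nodes, and survive the loss of any one node -- is often implemented with some form of consistent hashing, so that adding or removing a node remaps only a fraction of the stored data rather than all of it. The sketch below is purely illustrative (it is not any vendor's implementation, and the node names and virtual-node count are arbitrary assumptions):

```python
import hashlib
from bisect import bisect_right

class HashRing:
    """Minimal consistent-hash ring: each data block maps to a node, and
    adding or removing a node remaps only a fraction of the blocks.
    Illustrative sketch only, not any vendor's implementation."""

    def __init__(self, nodes=(), vnodes=64):
        self.vnodes = vnodes   # virtual nodes per physical node, for balance
        self.ring = {}         # hash position -> node name
        self.sorted_keys = []
        for n in nodes:
            self.add_node(n)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        """Scale out: place the new node's virtual points on the ring."""
        for i in range(self.vnodes):
            self.ring[self._hash(f"{node}#{i}")] = node
        self.sorted_keys = sorted(self.ring)

    def remove_node(self, node):
        """Simulate an independent node failure: only its blocks move."""
        self.ring = {h: n for h, n in self.ring.items() if n != node}
        self.sorted_keys = sorted(self.ring)

    def node_for(self, key):
        """First virtual point clockwise from the key's hash owns it."""
        h = self._hash(key)
        idx = bisect_right(self.sorted_keys, h) % len(self.sorted_keys)
        return self.ring[self.sorted_keys[idx]]

ring = HashRing(["node1", "node2", "node3"])
before = {k: ring.node_for(k) for k in (f"block{i}" for i in range(1000))}
ring.add_node("node4")                      # scale out by adding a node
after = {k: ring.node_for(k) for k in before}
moved = sum(before[k] != after[k] for k in before)
print(f"{moved} of 1000 blocks remapped")   # only a fraction move, not all
```

In a scale-up array, by contrast, growth means adding capacity behind the same controllers; the ring model is what lets loosely coupled nodes be upgraded or replaced one at a time, as the article notes.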
NAS platforms are well served too, including Isilon from EMC and, to a certain extent, clustered Data ONTAP from NetApp, although the latter is more a grouping of node pairs than a true scale-out system.
For block storage there are fewer systems available, the most notable being SolidFire. Part of the problem here is that Fibre Channel is not well suited to scale-out operations, due to inherent difficulties with multi-pathing. SolidFire is an iSCSI-only system today.
As computing evolves to hyperscale, we may well have seen the end of scale-up storage systems. The transition will take some time, but the storage landscape of 20 years' time will be very different from today's.
Chris Evans is an independent consultant with Langton Blue