
Match cache and storage tiers to app needs with new storage media

How to match cache and storage tier needs to the dizzying array of solid-state media now available, including flash, QLC/TLC, NVMe, storage-class memory, NVDimm and MRAM

The concepts of tiering and caching data are as old as computing itself. Both form an effective strategy in achieving the best price/performance ratio for storage in the enterprise.

But, with a huge increase in types of persistent media available today, tiering and caching are becoming more complex than ever.

In this article, we look at approaches being taken by suppliers, in an effort to help storage professionals get some perspective on improving storage performance for the right price.

Tiering and caching look to achieve the best possible ratio of price versus performance for persistent storage. In an ideal world, all data would be stored on the fastest media possible, but this isn’t practical from a cost, capacity or application design perspective.

Tiering is different from caching in that a tier of storage is usually persistent and represents the only copy of that data.

Meanwhile, a cache stores a copy of data that is held elsewhere on the system, in what is known as the backing store, and is expected to be volatile.

Use of tiering and caching technologies isn’t an either/or process and in general both are used together to complement overall system performance.

The working set

Caching exploits a characteristic of data called the working set.

At any point in time, an application uses only a subset of the data stored on persistent media in the backing store, such as flash or hard disk drive (HDD) in the server or in shared storage. So it is more cost-effective to keep just that active working set on the fastest possible media and to swap active and inactive data in and out of the backing store.

Depending on the application, it’s possible that as little as 5% to 10% of data could be active at any one time, and that makes caching an attractive way to increase system performance for the lowest cost.
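To make the idea concrete, here is a minimal sketch in Python – not any particular product’s implementation: a small least-recently-used (LRU) cache sits in front of a slow backing store, and because the application keeps returning to a small working set, most reads are served from fast media. The block counts and names are invented for illustration.

```python
import random
from collections import OrderedDict

class BackingStore:
    """Stands in for slow persistent media (HDD or flash) holding the full dataset."""
    def __init__(self, blocks):
        self.blocks = blocks
        self.reads = 0

    def read(self, lba):
        self.reads += 1                    # each call models one slow device I/O
        return self.blocks[lba]

class LRUReadCache:
    """Keeps copies of recently read blocks on fast media; evicts the least recently used."""
    def __init__(self, store, capacity):
        self.store = store
        self.capacity = capacity
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, lba):
        if lba in self.cache:              # cache hit: served from fast media
            self.hits += 1
            self.cache.move_to_end(lba)
            return self.cache[lba]
        self.misses += 1                   # cache miss: I/O runs at backing-store speed
        data = self.store.read(lba)
        self.cache[lba] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False) # evict the least recently used block
        return data

# A 10,000-block dataset, but the application keeps returning to a 500-block working set
store = BackingStore({lba: f"block-{lba}" for lba in range(10_000)})
cache = LRUReadCache(store, capacity=1_000)

working_set = list(range(500))
for _ in range(50_000):
    cache.read(random.choice(working_set))

print(f"hits={cache.hits} misses={cache.misses} backing-store reads={store.reads}")
```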

Unfortunately, caching comes with a number of trade-offs.

First, the system – either a storage array or processes on a server – has to guess which data will be active and which will not. If the caching algorithm gets it wrong, input/output (I/O) will occur at the speed of the backing store, resulting in lower performance. This is known as a “cache miss”.

The caching process uses system resources, so this has to be taken into consideration when looking at overall application performance.

Second, cache size is important to application performance. If the cache is too small, too much I/O will be directed to the backing store, resulting in lower performance. The overhead of caching can also result in excessive processor utilisation as the algorithms work to keep active data in the cache itself.
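How sensitive performance is to cache size can be seen with a rough simulation. The sketch below uses Python’s functools.lru_cache as a stand-in for the cache and a skewed read pattern as a stand-in for the application; the 90/10 split and block counts are assumptions, not measurements. The hit ratio climbs steeply until the cache covers the hot set, then flattens out.

```python
import functools
import random

def simulate(cache_size, accesses=100_000, hot_blocks=1_000, cold_blocks=50_000):
    """Return the cache hit ratio for a skewed read pattern with a given cache size."""

    @functools.lru_cache(maxsize=cache_size)
    def read_block(lba):
        return f"block-{lba}"          # a real miss would go to the backing store

    for _ in range(accesses):
        # 90% of reads go to a small hot set, 10% to the long cold tail
        if random.random() < 0.9:
            read_block(random.randrange(hot_blocks))
        else:
            read_block(hot_blocks + random.randrange(cold_blocks))

    info = read_block.cache_info()
    return info.hits / (info.hits + info.misses)

for size in (100, 500, 1_000, 5_000, 20_000):
    print(f"cache of {size:>6} blocks -> hit ratio {simulate(size):.2%}")
```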

Tiers of storage

Where the overhead of implementing a cache becomes too high, it may be more practical to tier data.

A good example is where I/O is heavily random and read-biased. In this instance, a caching algorithm would find it hard, if not impossible, to pre-fetch data into the cache, negating the benefit of fast cache media.

In this case, moving an entire dataset to a faster tier of storage is a better approach because it offers much more consistent I/O. This was the thinking behind the original all-flash suppliers, who aimed their products at applications where caching was less effective than moving the entire working set onto a flash platform.
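A back-of-envelope way to express that decision – purely a sketch, not any supplier’s algorithm – is to measure how often reads revisit a block. If reuse is low but the dataset fits on fast media, pinning the whole dataset to a faster tier beats caching. The 0.5 threshold and block counts below are arbitrary placeholders.

```python
import random
from collections import Counter

def placement_hint(access_trace, footprint_blocks, fast_tier_blocks):
    """Crude heuristic: cache when reuse is high, tier the whole dataset when it isn't."""
    counts = Counter(access_trace)
    # Fraction of reads that revisit a block already seen in the trace
    reuse = 1 - len(counts) / len(access_trace)

    if reuse > 0.5:
        return "cache: working set is small and re-read often"
    if footprint_blocks <= fast_tier_blocks:
        return "tier: random reads, but the whole dataset fits on fast media"
    return "neither fits well: expect backing-store speeds"

# Uniformly random reads across a large dataset -> almost no reuse
trace = [random.randrange(1_000_000) for _ in range(100_000)]
print(placement_hint(trace, footprint_blocks=1_000_000, fast_tier_blocks=2_000_000))
```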

The challenge in implementing tiering is knowing how much of each tier to use. This can be an issue at both the server and the storage system level, and two problems can occur.

Firstly, an individual application may not have enough storage of the right tier and need to be re-aligned with more or less of a particular storage tier.

Secondly, with shared storage, an array or appliance may not have enough of a specific tier to meet demand. Regularly adding and removing storage from an array isn’t a practical process.

Tiering also has a third issue to consider. As data volumes increase and the workload mix changes, some data may need to move between tiers to get the best levels of performance.

Early storage platforms moved entire volumes, but quickly implemented tiering at the sub-volume level. Today, most systems implement automated tiering because manual rebalancing would be impossible to achieve, and system administrators simply couldn’t keep up.
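In outline, automated sub-volume tiering amounts to a heat map: count I/O per extent over a monitoring period, then promote the hottest extents to the fast tier and demote the rest. The toy policy below is a simplified sketch; the extent counts, period and tier capacity are placeholders rather than how any given array behaves.

```python
import random
from collections import defaultdict

class SubVolumeTiering:
    """Toy heat-map tiering: count I/O per extent over a period, then keep the
    hottest extents on the fast tier."""

    def __init__(self, fast_tier_extents):
        self.fast_capacity = fast_tier_extents
        self.fast_tier = set(range(fast_tier_extents))    # arbitrary initial placement
        self.heat = defaultdict(int)                      # I/O count per extent, this period

    def record_io(self, extent):
        self.heat[extent] += 1

    def rebalance(self):
        """Run at the end of each monitoring period (eg, hourly)."""
        hottest = set(sorted(self.heat, key=self.heat.get, reverse=True)[: self.fast_capacity])
        promote = hottest - self.fast_tier                # extents the data mover copies up
        demote = self.fast_tier - hottest                 # extents pushed down to capacity media
        self.fast_tier = hottest
        self.heat.clear()                                 # start a fresh period
        return promote, demote

tiering = SubVolumeTiering(fast_tier_extents=100)
for _ in range(100_000):
    tiering.record_io(random.randrange(200))              # workload concentrated on 200 extents
promoted, demoted = tiering.rebalance()
print(f"promote {len(promoted)} extents, demote {len(demoted)} extents")
```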

Caching and tiering with media

In the early days of caching, active data was simply kept in dynamic random access memory (Dram). Today, we have a wide range of fast media that can be used as a cache or a tier.

Storage class memory (SCM), or persistent memory, is a set of technologies that are fast like Dram and byte-addressable, but persist across power cycles. Byte addressability means storage can be read and written, like Dram, one byte at a time, rather than as an entire block of storage.

SCM products, such as Intel Optane (3D-Xpoint), have smaller capacities than equivalent NVMe SSDs (at 16/32GB), but do offer much better latency (around 7µs read, 18µs write) with much greater endurance. This makes them perfect as cache devices. The persistence adds an extra degree of resilience, potentially reducing the need to flush data to the backing store or re-warm the cache on a server power cycle.
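What byte-addressability means in practice is that software can update a few bytes in place rather than rewriting a whole block. The sketch below uses Python’s mmap against a hypothetical file on a DAX-mounted persistent-memory filesystem (/mnt/pmem/counters is an assumed path); real persistent-memory code would normally rely on libraries such as PMDK for durable flushes, so treat this as an illustration of the access model only.

```python
import mmap
import os

# Hypothetical path on a DAX-mounted persistent-memory filesystem. Any ordinary
# file path will run the example; it just won't have SCM's persistence behaviour.
PATH = "/mnt/pmem/counters"
SIZE = 4096

fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as buf:
    # Byte-addressable update: modify eight bytes in place instead of reading,
    # changing and rewriting an entire block, as block storage would require.
    counter = int.from_bytes(buf[0:8], "little")
    buf[0:8] = (counter + 1).to_bytes(8, "little")
    buf.flush()   # msync; SCM deployments would typically use PMDK-style flushes

os.close(fd)
print("counter updated in place")
```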

Persistence on the bus

NVMe SSDs sit on the PCIe bus within a server. Meanwhile, NVDimm devices are deployed on the memory bus and offer persistent storage at faster rates than SCM.

Today, there is only one type of device available, the NVDimm-N, which uses Dram backed by Nand flash for persistence. Latency figures for NVDimm-N are measured in nanoseconds, so orders of magnitude faster than even Optane.

However, servers have to support NVDimm technology, so existing Dram Dimms can’t simply be replaced. Intel is promising Optane-based NVDimm products in 2H2018.

Another persistent technology available today is Magneto-resistive RAM (MRAM).

It has been in development for some time and uses magnetic resistance rather than an electrical charge to store persistent state. Everspin Technologies sells NVRAM products such as the nvNitro Storage Accelerator, with up to 1GB of capacity and latency figures as low as 6µs (read) and 7µs (write).

Although capacities are low, suppliers are using this technology in storage products today. IBM recently added MRAM as a replacement for supercapacitors in its latest FlashSystem 9100 modules.

Multiple tiers

The products described so far offer very low latency with relatively small capacity, and this makes them suitable for caching. For tiering, the requirements are different. Performance is still important, but we need large capacity devices too.

Nand flash continues to be the biggest player in this part of the market. The range of available products has been expanded in terms of capacity and performance, with some flash devices hitting 100TB (although the more practical maximum capacity is around 30TB).

QLC Nand, with up to four bits of data per cell, has started to emerge. Micron has released a QLC product for the datacentre, with an initial 7.68TB of capacity and the capability for larger devices. QLC offers a lower price point than other Nand products, but with much lower write endurance and lower performance.

We can expect to see QLC deployed as a read tier for use cases such as analytics, alongside TLC flash. Samsung and Toshiba have also announced consumer QLC products, so expect to see enterprise versions soon.

At the top end of the flash market, Samsung has released Z-Nand, a low-latency, high endurance version of traditional Nand. Toshiba has announced a similar product called XL-Flash. Both these offerings are aimed at the performance level of Optane, but at lower cost and higher capacity. This makes them more suitable as a tier of storage rather than a small cache.

Of course, Optane can still be used as a tier of storage, although NVMe SSD devices are still relatively low in capacity – 375GB and 750GB from Intel. It is also relatively expensive compared with other media, which means the use case needs to justify the additional expense.

Future directions

The storage hierarchy looks to be diversifying, with NVDimm-connected SCM at the high end, SCM and Z-Nand NVMe products in the middle and TLC/QLC NVMe/Sata drives for capacity.

Slowly, but surely, hard drives are being pushed out of the datacentre, with use cases being limited to large-scale archive and backup.

As all-flash systems came to market, perceptions grew that storage was moving to a single, consistent level of capacity/performance, but it’s clear that reality couldn’t be further from the truth. Tiering and caching will continue to be a major feature of future persistent storage design.
