
Cache vs tier: What’s the difference between cache and storage?

We look at the key distinctions between cache and tiers of storage, and where the line has become blurred with fast flash, 3D Xpoint and storage class memory technologies

The rise of flash storage to mainstream datacentre technology status has brought with it questions about where and how exactly it can be used – and one of the big questions that comes up is over the use of flash media as cache and as a storage tier.

In this article we’ll try to clear up the distinction between cache and tier, but also point out where things get blurred.

Hardware cache exists at numerous levels in the IT infrastructure. It is needed because of the mismatch that exists between components. In input/output (I/O) terms some parts of the IT infrastructure simply can’t read or write as quickly as others, so there needs to be a buffer between them that can handle the more rapid ingress and egress of data.

Its essential function is to provide a rapidly accessible location for data that’s needed by currently running operations, or at least for the most important chunks of it.

You’ll find its most performant and expensive variety in the central processing unit (CPU). Then – dropping down an order of magnitude in terms of performance – there’s RAM, though this too can include a portion of SRAM, which serves as a more quickly accessible part of the memory alongside the bulk of the DRAM media on the module.

Below RAM there could also be some storage class memory, although this could be cache or form a tier, as we will see.

Further out in the architecture, there is also likely to be some cache on bulk storage. Whether SSD or spinning disk HDD, there is often some cache here to even out the stream of I/O as it hits the bulk storage media. That really falls outside the main part of this discussion, but wherever it resides, cache has a number of key characteristics.

Cache is a copy

The main one is that it is almost always a copy. That’s because the extremely high-performance hardware used is almost always volatile, and data would be lost if power was removed.

For the most part, too, data in cache will be byte-addressable. That means the media can be written to and read from at the level of a single byte. That’s a key characteristic of memory, and what distinguishes it from storage.
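
To make that distinction concrete, here is a minimal Python sketch that contrasts byte-addressable memory with block-addressable storage. The 4,096-byte block size, class names and methods are assumptions for illustration only, not how any particular device or product behaves.

```python
# Illustrative sketch only: byte-addressable memory vs block-addressable storage.
# The 4 KiB block size and class names are assumptions for the example.

BLOCK_SIZE = 4096  # a common storage block size, assumed here for illustration


class ByteAddressableMemory:
    """Memory-like media: any single byte can be read or written directly."""

    def __init__(self, size: int):
        self.data = bytearray(size)

    def write_byte(self, offset: int, value: int) -> None:
        self.data[offset] = value          # touch exactly one byte

    def read_byte(self, offset: int) -> int:
        return self.data[offset]


class BlockAddressableStorage:
    """Storage-like media: reads and writes happen in whole, aligned blocks."""

    def __init__(self, num_blocks: int):
        self.blocks = [bytearray(BLOCK_SIZE) for _ in range(num_blocks)]

    def write_block(self, block_no: int, payload: bytes) -> None:
        if len(payload) != BLOCK_SIZE:
            raise ValueError("must write a whole block")
        self.blocks[block_no][:] = payload

    def read_block(self, block_no: int) -> bytes:
        return bytes(self.blocks[block_no])


# Changing one byte in memory is a single operation; changing one byte on
# block storage means reading, modifying and rewriting the whole block.
mem = ByteAddressableMemory(BLOCK_SIZE)
mem.write_byte(42, 0xFF)

disk = BlockAddressableStorage(num_blocks=8)
block = bytearray(disk.read_block(0))
block[42] = 0xFF
disk.write_block(0, bytes(block))
```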

But while those are the things that define cache at a technical level, in terms of how it’s used, its key attribute is to provide a temporary location for data that may need to be rapidly accessed, or that can’t be written quickly enough to its final destination.

First, let’s look at some basic cache types

Write-through, write-back, write-around cache

Cache behaviour – which you can often set from a storage system or an application – divides into some key types, illustrated in the sketch that follows this list. They include:

Write-through cache: writes to cache media and to underlying permanent storage simultaneously before confirming the operation to the host. Here, data is super-safe because it’s written to a shared storage array, but the disadvantage is that the initial I/O experiences latency based on writing to that storage. It’s good for use cases where data is written and then will be re-read frequently.

Write-around cache: by contrast, sees I/O written directly to permanent storage, bypassing the cache. This avoids the overhead of writing to the cache, but if data is needed soon after it is committed it will have to be fetched from bulk media and you may experience a “cache miss”.

Write-back cache: where write I/O goes to cache and is immediately confirmed to the host. Data is staged off to bulk media later. This gives low latency and high throughput, but is potentially vulnerable to data loss because, for some period of time, data exists only in cache.
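
Here is a minimal Python sketch of all three policies, assuming plain dictionaries stand in for the cache and the backing store. The class and method names are purely illustrative and not drawn from any real storage product’s API.

```python
# Illustrative sketch of the three write policies described above.
# The "cache" and "backing store" are plain dictionaries; names are assumptions.


class SimpleCache:
    def __init__(self):
        self.cache = {}          # fast, volatile copy
        self.backing_store = {}  # bulk, persistent location
        self.dirty = set()       # written to cache but not yet to the store

    def write_through(self, key, value):
        # Write to cache and to permanent storage before acknowledging.
        self.cache[key] = value
        self.backing_store[key] = value
        return "ack"             # host waits for both writes

    def write_around(self, key, value):
        # Bypass the cache entirely; only permanent storage is updated.
        self.backing_store[key] = value
        self.cache.pop(key, None)  # drop any stale cached copy
        return "ack"

    def write_back(self, key, value):
        # Acknowledge as soon as the cache holds the data; destage later.
        self.cache[key] = value
        self.dirty.add(key)
        return "ack"             # data exists only in cache until flushed

    def flush(self):
        # Destage dirty data from cache to the backing store.
        for key in list(self.dirty):
            self.backing_store[key] = self.cache[key]
            self.dirty.discard(key)

    def read(self, key):
        if key in self.cache:
            return self.cache[key]        # cache hit
        value = self.backing_store[key]   # cache miss: fetch from bulk media
        self.cache[key] = value
        return value
```

Note how a read shortly after write_around() falls through to the backing store – the “cache miss” described above – while write_back() acknowledges before anything reaches permanent storage.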

Storage: Bulky, block-addressable and protected

While you could argue that any data held on any media for any amount of time constitutes a form of storage, for our purposes that’s not the case.

Cache is distinguished by being an ephemeral copy of a relatively small amount of currently in-use data for rapid access or to buffer writes.

Storage, by contrast, sits behind (or below) cache, provides the bulk, long-term retention location for data, and is addressed in blocks rather than bytes.

It can have a wide range of performance and access characteristics – from relatively slow spinning disk HDDs to very rapid flash media.


That’s how tiers of storage are defined – for example, where an organisation keeps the bulk of its data on slower spinning disk media, while current operations run from a faster layer of solid state drives. And this tiering can be set up to run automatically, with data moved in and out of tiers that perform differently depending on usage profile.
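
As a rough sketch of what that kind of policy-driven movement looks like, the Python below promotes frequently read items to a fast tier and demotes idle ones to a bulk tier. The thresholds, tier names and access-count heuristic are assumptions for the example, not how any particular array implements auto-tiering.

```python
# Illustrative sketch of usage-based tiering. Thresholds, tier names and the
# access-count heuristic are assumptions; real arrays use far richer policies.


class TieredStore:
    PROMOTE_AFTER = 3   # assumed: promote after this many reads in a cycle

    def __init__(self):
        self.fast_tier = {}     # e.g. solid state drives
        self.bulk_tier = {}     # e.g. spinning disk
        self.access_counts = {}

    def write(self, key, value):
        # New data lands on the bulk tier by default in this sketch.
        self.bulk_tier[key] = value
        self.access_counts[key] = 0

    def read(self, key):
        self.access_counts[key] = self.access_counts.get(key, 0) + 1
        if key in self.fast_tier:
            return self.fast_tier[key]
        return self.bulk_tier[key]

    def rebalance(self):
        # Run periodically: hot data moves up, cold data moves down.
        for key, count in self.access_counts.items():
            if count >= self.PROMOTE_AFTER and key in self.bulk_tier:
                self.fast_tier[key] = self.bulk_tier.pop(key)
            elif count == 0 and key in self.fast_tier:
                self.bulk_tier[key] = self.fast_tier.pop(key)
            self.access_counts[key] = 0  # start a new observation cycle
```

Unlike the cache sketch earlier, data here is moved between tiers rather than copied – which is exactly the distinction between a tier and a cache.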

It’s the rise of SSDs at the higher end of the performance envelope that tests these definitions.

Namely, does the use of fast flash media as a layer between bulk storage and RAM/CPU constitute cache? Or is it a tier of storage? Well, the answer is, it depends.

More than ever, it is now possible to interpose a high-performance layer of solid state media, such as NVMe-connected flash or 3D Xpoint, close to where processing takes place.

But for the most part that will constitute a tier of storage, whether designated tier 1, tier 0 or whatever. That’s because it will be block-addressable storage and it won’t be a copy, except in the sense that it is protected by Raid and backed up.

The line does become blurred with storage class memory – fast solid state media such as Intel Optane, for example – which can operate in byte-addressable mode (and therefore as cache) or block-addressable mode (and therefore as storage).
