Zoning in on better storage performance and capacity

The fundamentals of data storage are undergoing a significant evolution, and many – if not most – IT administrators won’t even have noticed yet. 

Called zoned storage, it’s a shift away from host servers simply handing data over to the storage device and letting that device figure out how and where to write it, and towards the host playing a role in data placement decisions.

Compared to random-access drives, zoned storage devices have different writing rules. Zones can only be written sequentially and starting from the beginning of the zone. In addition, data within a zone cannot be erased or overwritten without erasing the whole zone.
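
To make those rules concrete, here is a minimal sketch in Python of how a host-managed zone behaves. It is purely an illustrative model, not a real device interface: each zone keeps a write pointer, writes must land exactly on that pointer, and reclaiming space means resetting the whole zone.

```python
# A minimal, illustrative model of zoned-storage write rules -- not a real
# device API. Each zone tracks a write pointer; writes must land exactly at
# the pointer (i.e. sequentially), and reclaiming space means resetting the
# whole zone rather than erasing individual blocks.

class Zone:
    def __init__(self, start_lba: int, length: int):
        self.start = start_lba          # first logical block of the zone
        self.length = length            # zone size in logical blocks
        self.write_pointer = start_lba  # next block that may be written

    def write(self, lba: int, nblocks: int) -> None:
        # Zones may only be written sequentially, starting at the write pointer.
        if lba != self.write_pointer:
            raise ValueError("unaligned write: zones must be written sequentially")
        if lba + nblocks > self.start + self.length:
            raise ValueError("write would overflow the zone")
        self.write_pointer += nblocks   # data behind the pointer is now immutable

    def reset(self) -> None:
        # Overwriting or erasing anything in the zone means resetting it entirely.
        self.write_pointer = self.start


zone = Zone(start_lba=0, length=1024)
zone.write(0, 8)        # fine: starts at the write pointer
zone.write(8, 8)        # fine: continues sequentially
# zone.write(0, 8)      # would raise: in-place overwrites are not allowed
zone.reset()            # the only way to make the zone writable from the start again
```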

Although for today’s storage technology it’s an evolution, in some ways it’s a return to the past: to the days when storage was host-managed, rather than self-managed and semi-virtualised as today’s devices are.

Why some storage and database developers see zoned storage as the future

While hyperscale companies still do host-side data management, the arrival of RAID made this kind of thinking redundant for the rest of us, and then solid-state storage and Flash memory seemed to have killed it off completely. Using silicon chips instead of spinning disks not only made drives more compact and power-efficient, but they no longer had motors or physical read/write heads. So all the delays involved in swinging the head and then waiting for the right disk sector to spin underneath it simply vanished.

Except for two factors: first, disk drives did not completely disappear, and second, there is a lot more to using Flash efficiently than making it pretend to be a disk drive! In both cases, it is the need to support random access that makes it harder to achieve higher capacities and greater performance.

On the disk side, the latest high-capacity SMR (shingled magnetic recording) hard drives must be written sequentially and erased in blocks – the file system can’t drop in and edit or erase a bit in the middle of a track, as it could with older disk technologies.

Flash too must be erased in blocks, so edits involve re-writing the edited block elsewhere before erasing the original. Among other issues, this leads to performance and capacity-sapping phenomena called write amplification and garbage collection. The result is that even NVMe cannot take full advantage of the capabilities of modern non-volatile storage.
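
A quick back-of-the-envelope example shows why. Suppose – and the page and block sizes here are illustrative assumptions, not any particular drive’s geometry – that pages can be written individually but only erased 64 pages at a time. Overwriting a single page in an otherwise full block forces the drive to copy all the still-valid pages before it can erase, so the flash absorbs far more writes than the host actually issued.

```python
# A back-of-the-envelope illustration of write amplification on flash.
# The page and block sizes below are illustrative assumptions, not the
# figures for any particular SSD.

PAGE = 4 * 1024            # smallest writable unit (bytes)
BLOCK = 64 * PAGE          # smallest erasable unit: 64 pages = 256 KiB

def write_amplification(host_bytes: int, live_pages_in_block: int) -> float:
    """Host overwrites `host_bytes` inside one erase block that still holds
    `live_pages_in_block` pages of other, still-valid data.

    The drive must copy the live pages elsewhere before it can erase the
    block, so the flash absorbs more writes than the host issued.
    """
    flash_bytes = host_bytes + live_pages_in_block * PAGE
    return flash_bytes / host_bytes

# Overwriting one 4 KiB page in a block that is otherwise full of live data:
waf = write_amplification(host_bytes=PAGE, live_pages_in_block=63)
print(f"write amplification factor: {waf:.0f}x")   # 64x in this worst case
```

Real drives defer much of this work through garbage collection and over-provisioning, but the underlying cost never goes away entirely – it just moves around, eating into performance and usable capacity.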

NVMe adopts Zoned Namespaces

The proposed solution for NVMe SSDs is therefore to define zones on the drive, as is already done for SMR disks, via a specification called Zoned Namespaces (ZNS). This is being developed by the NVM Express consortium and is based on earlier work by Western Digital.

ZNS adds new commands that allow the host storage stack to work with zones, because zoned storage is simpler and more efficient to implement if the host takes on at least some of the responsibility for data placement. Some Linux versions already support ZNS, and ZNS-capable drivers are available for database platforms such as MySQL.
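
If you want to see whether your own kernel treats a drive as zoned, recent Linux versions expose this through sysfs. The sketch below simply reads those attributes; the device name nvme0n1 is just a placeholder, and the nr_zones and chunk_sectors files only appear on kernels new enough to support zoned block devices.

```python
# A small sketch that asks the Linux kernel whether a block device is zoned,
# by reading the sysfs attributes recent kernels expose for zoned devices.
# "nvme0n1" is only an example device name; adjust it for your system.

from pathlib import Path

def zoned_info(dev: str) -> dict:
    queue = Path("/sys/block") / dev / "queue"
    info = {
        # "none", "host-aware" or "host-managed"
        "model": (queue / "zoned").read_text().strip(),
    }
    nr_zones = queue / "nr_zones"
    chunk = queue / "chunk_sectors"            # zone size, in 512-byte sectors
    if nr_zones.exists():
        info["nr_zones"] = int(nr_zones.read_text())
    if chunk.exists():
        info["zone_size_bytes"] = int(chunk.read_text()) * 512
    return info

if __name__ == "__main__":
    print(zoned_info("nvme0n1"))
```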

Of course, random access won’t be ignored – it will be possible to define a separate random-access namespace on the same SSD. But an application or file system that can work with a zoned block device (ZBD) to align its data to the physical medium should gain both storage performance and capacity.
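
What “aligning data to the physical medium” can look like in practice is sketched below: a toy, in-memory key-value store that only ever appends to the current zone and treats updates as new writes rather than overwrites. It is an illustration of the idea, not how any particular file system or database actually implements ZNS support.

```python
# A toy example of zone-aligned data placement: records are only appended at
# the current zone's tail, updates go to a new location rather than in place,
# and an in-memory index tracks where the latest version of each key lives.
# Entirely illustrative; no real file system or ZNS API is used here.

class ZoneAlignedStore:
    def __init__(self, zone_size: int, nr_zones: int):
        self.zone_size = zone_size
        self.zones = [bytearray() for _ in range(nr_zones)]   # stand-ins for real zones
        self.current = 0                                      # zone currently being filled
        self.index = {}                                       # key -> (zone, offset, length)

    def put(self, key: str, value: bytes) -> None:
        zone = self.zones[self.current]
        if len(zone) + len(value) > self.zone_size:
            self.current += 1                                 # zone full: move to the next one
            zone = self.zones[self.current]
        offset = len(zone)
        zone += value                                         # strictly sequential append
        self.index[key] = (self.current, offset, len(value))  # stale versions stay behind until a zone reset

    def get(self, key: str) -> bytes:
        z, off, length = self.index[key]
        return bytes(self.zones[z][off:off + length])


store = ZoneAlignedStore(zone_size=1 << 20, nr_zones=4)
store.put("row:1", b"first version")
store.put("row:1", b"second version")     # an update is a new append, not an overwrite
print(store.get("row:1"))                 # b'second version'
```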

We could go a lot deeper into zoned storage, which is in its early days and is still working up momentum. In particular, more software support is needed, of course. This introduction might give you enough for now though, plus there are links above for further reading.
