Solid-state cache set to speed up server data access

Chris Mellor, Contributor

Several vendors -- including Intel, SanDisk and Microsoft -- are trying to harness the rapid read/write capabilities of solid-state cache, or flash data cache, to smooth internal PC data transfer bottlenecks, but it looks like Sun Microsystems is set to take the lead with cache support in its server operating systems (OSes).

Server processors regularly have to make the equivalent of a pit stop. A server CPU with its many cores has to scream through its workload and then head into the pits where its data is dumped to disk and it's refuelled with fresh data from disk. For a CPU tearing through workloads at gigahertz clock speeds, waiting for disk is an embarrassingly slow interruption.

A generic server's processor cores each have their own data stores -- registers -- and a level 1 (L1) cache. The level 2 (L2) cache is shared between the cores and acts as a buffer between the very fast CPU and the slower main memory, which is level 3 (L3) in this hierarchy. Dynamic random access memory (DRAM) is in turn far faster to access than disk, which sits at level 4 (L4).
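
The gap between those levels is easier to feel with a toy model. The Python sketch below walks a lookup down the hierarchy; the level names and latency figures are illustrative orders of magnitude assumed for this example, not measurements from any particular server.

    # Illustrative sketch of a multi-level memory hierarchy lookup.
    # Levels and latencies are assumptions for demonstration only.
    HIERARCHY = [
        ("L1 cache",    1e-9),   # roughly a nanosecond
        ("L2 cache",    1e-8),   # tens of nanoseconds
        ("main memory", 1e-7),   # around a hundred nanoseconds
        ("disk",        1e-2),   # milliseconds -- the pit stop
    ]

    def lookup(address, contents):
        """Return (level, cumulative latency) for the first level holding the address."""
        latency = 0.0
        for level, cost in HIERARCHY:
            latency += cost
            if address in contents.get(level, set()):
                return level, latency
        raise KeyError(address)

    # A miss that falls all the way to disk dominates total access time.
    contents = {"L1 cache": {0x10}, "main memory": {0x20}, "disk": {0x30}}
    for addr in (0x10, 0x20, 0x30):
        level, latency = lookup(addr, contents)
        print(f"{hex(addr)} served from {level} in about {latency:.9f} seconds")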

Every time a fast device has to get data from a slow one, it must wait for the data to arrive. A simple way to see this in action is to open the Windows Task Manager while you're word processing on your PC: the CPU is mostly idle, usually springing into action only to register a newly typed character.

The mitigation is caching. Modern CPUs have large caches that help minimise the number of main memory accesses they have to make for data. Each cache is small, fast and very expensive compared with the next member of the data access hierarchy, which is slower, higher capacity and cheaper. That's why there hasn't been a cache in a server (or PC) between the main memory and disk. It's a pity, because main memory operates at lightning speed compared with the relatively glacial pace of disk.
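
The principle fits in a few lines of Python. The sketch below -- with a hypothetical backing store and block numbers -- puts a small least-recently-used (LRU) cache in front of a slow read function so that repeat reads never touch the slow device.

    from collections import OrderedDict

    class ReadCache:
        """Minimal LRU read cache in front of a slow backing store (sketch only)."""

        def __init__(self, backing_read, capacity):
            self.backing_read = backing_read  # function: block number -> data
            self.capacity = capacity          # number of blocks the cache can hold
            self.blocks = OrderedDict()       # block number -> data, in LRU order

        def read(self, block):
            if block in self.blocks:
                self.blocks.move_to_end(block)   # hit: refresh LRU position
                return self.blocks[block]
            data = self.backing_read(block)      # miss: pay the slow-device cost
            self.blocks[block] = data
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict the least recently used block
            return data

A real cache inside an operating system lives in the block layer or file system and has to track dirty data, but the trade-off is the same: a little fast, expensive storage absorbing the repeat reads that would otherwise go to disk.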

But the fall in flash memory prices means flash memory caches can now be inserted in servers to reduce the number of disk accesses applications make. Server vendors can contemplate adding flash data caches -- in effect miniature solid-state drives (SSDs) -- to their servers, connected by, say, a Peripheral Component Interconnect Express (PCIe) bus, along with system-level operating software to make use of them.

Intel introduced flash memory on motherboards with its mini-PCIe card Robson technology in 2005, and showed a laptop booting up almost instantly. But the success of this demo wasn't reflected in real-world performance.

SanDisk showed its similar Vaulter Disk technology two years later, with capacities of 8 GB and 16 GB vs. Intel's 512 MB and 1 GB. Yet neither product has set the world on fire, and attention has moved to replacing the hard disk altogether in netbooks. In a year or two, notebooks with much larger flash solid-state drives are expected.

The basic principle seemed sound, but Windows and Linux operating systems have to actually use the flash data cache, and caching only helps once the cache is populated: at boot time, unless the operating system itself resides in the cache, files still have to come from disk. Microsoft's Windows Vista comes with Windows ReadyBoost (USB thumb drive caching) and Windows ReadyDrive (hybrid flash/hard drive support), but the real-world benefits aren't that worthwhile. Vista is now seen as a lacklustre and resource-hogging replacement for Windows XP. Perhaps Windows 7 will be better.
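
That population problem shows up even in a toy simulation. The sketch below assumes a made-up working set and cache size; the first pass over the data misses everywhere and pays the disk penalty, and only later passes are served from the cache.

    # Toy illustration of cache warm-up: the first pass over a working set
    # misses everywhere, later passes hit. Sizes and pattern are hypothetical.
    cache, capacity = set(), 64
    working_set = list(range(64))        # block numbers the workload keeps touching

    for pass_number in range(1, 4):
        hits = misses = 0
        for block in working_set:
            if block in cache:
                hits += 1
            else:
                misses += 1              # this would be a slow disk read
                cache.add(block)
                if len(cache) > capacity:
                    cache.pop()          # crude eviction of an arbitrary block
        print(f"pass {pass_number}: {hits} hits, {misses} misses")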

These types of server caches shouldn't be used for faster booting, but for quicker overall performance through caching temporary data and working set data. In addition, they should store disk write data until it's absolutely necessary to send it to the much slower disk. System software needs to be coded to use them, and frivolous add-ons such as USB-connected flash thumb drives shouldn't be pressed into service as caches.
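
The deferred-write idea can be sketched just as simply. The Python below, which assumes an invented block interface and flush policy, absorbs writes into fast storage, coalesces repeated writes to the same block and only touches the slow disk when the buffer is flushed.

    class WriteBackCache:
        """Sketch of a write-back cache: absorb writes quickly, flush to disk lazily."""

        def __init__(self, disk_write, flush_threshold):
            self.disk_write = disk_write        # function: (block number, data) -> None
            self.flush_threshold = flush_threshold
            self.dirty = {}                     # block number -> latest data, coalesced

        def write(self, block, data):
            # Overwriting the same block before a flush costs no extra disk traffic.
            self.dirty[block] = data
            if len(self.dirty) >= self.flush_threshold:
                self.flush()

        def flush(self):
            # Only now does the slow device see the accumulated writes.
            for block, data in sorted(self.dirty.items()):
                self.disk_write(block, data)
            self.dirty.clear()

A real write cache also has to survive power loss, which is one attraction of keeping it in non-volatile flash rather than in DRAM.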

Sun Microsystems has been quite informative about how it intends to use flash data caches in its servers. It has put a lot of storage system processing responsibility into its ZFS file system, which removes the need for RAID controllers by having ZFS do the data protection work itself. ZFS working data, which is currently stored on disk, will instead be held in separate read and write caches. Sun distinguishes between read-intensive working set data and write-intensive data, and aims to put them in separate caches optimised for read or write speed. The read flash would be a level 3 cache, with level 4 being the write flash.
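
Purely as an illustration of that split -- not a description of Sun's actual implementation -- the sketch below routes reads through one flash-backed cache and writes through a separate flash-backed log, draining the log to disk later. All class names and interfaces here are invented.

    class SplitFlashCache:
        """Sketch of separate read and write flash caches in front of a slow disk."""

        def __init__(self, disk, read_capacity):
            self.disk = disk               # object with read(block) and write(block, data)
            self.read_cache = {}           # read-optimised flash: recently read blocks
            self.read_capacity = read_capacity
            self.write_log = []            # write-optimised flash: pending writes, in order

        def read(self, block):
            for pending, data in reversed(self.write_log):
                if pending == block:       # an unflushed write wins over disk contents
                    return data
            if block not in self.read_cache:
                if len(self.read_cache) >= self.read_capacity:
                    self.read_cache.pop(next(iter(self.read_cache)))  # crude eviction
                self.read_cache[block] = self.disk.read(block)
            return self.read_cache[block]

        def write(self, block, data):
            self.write_log.append((block, data))   # acknowledged once it lands in fast flash
            if block in self.read_cache:
                self.read_cache[block] = data      # keep the cached copy consistent

        def flush(self):
            for block, data in self.write_log:     # drain the log to the slow disk in order
                self.disk.write(block, data)
            self.write_log.clear()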

Sun's servers will be designed and built with such caches so they can run Solaris, ZFS and other Sun system software as fast as possible by avoiding disk accesses -- CPU pit stops -- and keeping the processor cores zipping along as continuously as possible. The firm's system software will use flash data cache for micro pit stops, which will reduce the number of major pit stops -- disk accesses -- it needs to make. It sounds good, but we'll have to see how it pans out.

If Sun gets it right, we could see Storage Performance Council (SPC) benchmarks later this year that show Sun Solaris servers wiping the floor with competing Linux and Windows ones. It will be interesting to see if Microsoft and VMware hypervisors evolve to use flash data caches. I reckon we'll need an industry standard defining x86 server flash cache use for this to happen, as both hypervisors are written only for industry-standard servers, which isn't the case for Solaris and its container technology.

Chris Mellor is storage editor with The Register.
