Why is that? It's for the same reasons you don't get SSDs in desktop PCs: Windows can't exploit their speed well enough to justify the cost, and there is no standard architecture for incorporating them as a large, fast cache between a server's main memory and its hard drives.
If servers could use SSDs better, then the SSD performance/cost balance would be more favourable and vendors with SSD-enhanced servers would be in a better market position. Sun, Dell, HP and IBM are all making positive noises about this, with Sun releasing the most detail. Sun says that system software, such as Solaris, MySQL and ZFS, uses hard drive storage for working data. If this data could be held in SSDs instead, the servers would spend less time waiting for system software I/O and more time running apps -- which is the whole reason they are there in the first place.
Sun is aiming to fit its servers with SSD caches and update its software to use them. It has also developed extended-life flash with Samsung as a means of overcoming the limited write endurance problem. The other server vendors haven't made noises around system software changes, but are all keen on server-side SSD kit. Intel's SSD launch was accompanied by enthusiastic supporting messages from HP and (surprisingly, given the Samsung deal) Sun.
HP and IBM are keen on Fusion-io ioDrive SSD cards. Indeed, IBM's Quicksilver project has demonstrated the use of these cards in a specially modified SAN Volume Controller (SVC). This modified SVC had the ioDrives hooked up to its PCIe bus and reached 1 million IOPS (Quicksilver indeed). The SVC software was modified, implying that IBM understands the need to change system software for SSD use.
The standards problem is a hard nut to crack quickly because, even if there were a standard way to add SSDs to industry-standard servers, the operating systems would still need modifying to use them. From the OS point of view, SSDs are not simply drop-in hard drive replacements: the OS reads and writes to them as if they were disks, in small disk blocks, rather than in the larger page sizes that flash is actually organised around. It means that Windows Server, HP-UX, AIX and both Red Hat and Novell's SuSE Linux will need SSD extensions to detect SSD-enhanced server hardware and modify their I/O code to use them. Ditto the main system software items.
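To see why the block/page mismatch matters, consider a rough sketch of write amplification. The sizes below are typical assumptions for illustration (512-byte OS sectors, 4 KB flash pages), not figures from any vendor mentioned here:

```python
# Illustrative sketch, not vendor code: why small OS disk blocks map poorly
# onto larger flash pages. SECTOR and FLASH_PAGE are assumed typical values.

SECTOR = 512          # classic OS/disk block size in bytes
FLASH_PAGE = 4096     # smallest unit a flash device can program

def pages_written(io_bytes: int) -> int:
    """Whole flash pages the controller must program for one OS write."""
    return -(-io_bytes // FLASH_PAGE)  # ceiling division

def write_amplification(io_bytes: int) -> float:
    """Ratio of bytes physically programmed to bytes the OS asked to write."""
    return pages_written(io_bytes) * FLASH_PAGE / io_bytes

# A single 512-byte sector update still programs a full 4 KB page:
print(write_amplification(SECTOR))      # 8.0
# A page-aligned 4 KB write is 1:1:
print(write_amplification(FLASH_PAGE))  # 1.0
```

An SSD-aware OS would batch and align I/O to page boundaries, which is exactly the kind of I/O-path change the article argues these operating systems need.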
Without server SSD architecture standards, independent middleware vendors, like EMC with VMware, Oracle, SAP and so forth, will be less able to modify their server I/O code to use SSD caches because they have no hardware standard to code to.
Against this background, the multi-level cell (MLC) capacity question and SSDs' asymmetric read/write speeds look like much lower-priority issues.
Faster SSD controllers can do something to offset server OS inefficiency with SSDs, but the biggest boost will come from OS modification. Sun will have an easier time here because it has direct access to the software involved and makes its own SPARC servers. The other server suppliers don't, unless the OS is their own Unix -- and even then they still have the architecture problem to deal with.
Even so, the HP-UX and AIX engineers may be busy adding SSD I/O code modules to these products and hoping for an SSD-based increase in their sales versus Windows.
SSDs don't represent a silver bullet for server hard drive I/O problems. If and when it comes, the bullet will have two parts: a largely Intel-driven server SSD hardware standard, and a Microsoft-driven Windows Server SSD I/O service pack or major release. My guess is that neither arrives until the second half of next year, with buyable product in 2010. If Sun can get its act together before then, its SunFire servers could make a lot of hay while the other server vendors languish, SSD-less, in its shadow.
Chris Mellor is storage editor at The Register