The old adage “tape is dead” seems to be revived annually in the storage world, but in reality tape storage product innovation continues apace. In this article, we examine how tape storage products have evolved over the past 12 to 18 months and look at where they are headed in the future.
At the core of every tape storage system is the tape drive, and in the enterprise there are three major tape drive formats, each of which has seen new products. LTO (Linear Tape-Open) drives, developed by the LTO consortium (Hewlett-Packard, IBM and Quantum), are resold within tape libraries or as standalone units by tape vendors such as HP, Dell, IBM, Oracle and Quantum. In addition, IBM and Oracle each retain their own enterprise tape storage formats: IBM’s from its mainframe heritage and Oracle’s as a result of its acquisition of Sun/StorageTek.
LTO tape storage generations
In line with prior drive generations, LTO-5 doubles the previous capacity specification, able to store 1.5 TB of data in native format and having a quoted 3 TB capacity using an average 2:1 compression. Clearly, compression ratios can vary and depend on the nature of the data being written to the device. Data transfer rates have now reached 140 MBps native and again are quoted as double that (280 MBps) for compressed data.
LTO-5 introduced the concept of tape partitioning, which divides the media into two separately writable areas. From this, IBM developed a feature known as LTFS, or Linear Tape File System, which we will address below.
The LTO consortium in June released specifications for LTO-6, the next generation of drive and media. LTO-6 is expected to have a capacity of 3.2 TB native (8 TB compressed) with transfer rates of 210 MBps native and 525 MBps compressed. In this instance, the assumed compression ratio has been improved to 2.5:1 as a result of an increase in the size of the compression history buffer.
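The quoted “compressed” figures are simply the native numbers scaled by the assumed compression ratio. A small Python sketch makes the arithmetic explicit (the figures are the roadmap values quoted above; real-world ratios depend entirely on the data):

```python
# Illustrative calculation only: vendors quote compressed capacity and
# throughput as native figures multiplied by an assumed compression ratio.

def compressed(native, ratio):
    """Scale a native capacity (TB) or transfer rate (MBps) by a ratio."""
    return native * ratio

# LTO-5: 1.5 TB native, 140 MBps native, assumed 2:1 compression
print(compressed(1.5, 2.0))   # -> 3.0 (TB)
print(compressed(140, 2.0))   # -> 280.0 (MBps)

# LTO-6: 3.2 TB native, 210 MBps native, improved 2.5:1 ratio
print(compressed(3.2, 2.5))   # -> 8.0 (TB)
print(compressed(210, 2.5))   # -> 525.0 (MBps)
```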
HP offers LTO-5 tape storage drives across its range of libraries, including the enterprise-class ESL G3 tape library. Dell also provides LTO-3, LTO-4 and LTO-5 drives across its range of tape libraries, from the entry-level PowerVault 124T to the ML6030 enterprise-class system.
IBM 3590 and 3592
Although IBM is part of the LTO consortium, it also maintains its own separate tape product line, based on the older 3590 (Magstar) and 3592 (Jaguar) products. These originated from the mainframe platform and were successors to the original 3480 tape cartridge format.
IBM released the TS1140 tape storage drive in May. It has a capacity of 4 TB (native, uncompressed) with a sustained data transfer rate of 250 MBps. Although originally developed for the mainframe, the TS1140 drive supports Unix, Linux and Windows platforms.
In conjunction with its TS1140 release, IBM updated the TS3500 tape library earlier this year. The TS3500 scales to 16 frames, 192 drives and more than 20,000 tape slots per logical library. Multiple TS3500 tape libraries can be connected to create a single complex with support for 2,700 drives (both LTO and 3592) and more than 300,000 tape cartridges. Using 4 TB TS1140 drives, this configuration can hold a staggering 1.2 exabytes of data.
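The exabyte headline figure follows directly from the slot count and the per-cartridge capacity. A back-of-envelope check in Python (an illustrative calculation, not vendor documentation):

```python
# Back-of-envelope check of the connected-TS3500 capacity claim,
# using the figures quoted in the article.

cartridges = 300_000          # more than 300,000 cartridges in the complex
tb_per_cartridge = 4          # TS1140 native capacity in TB

total_tb = cartridges * tb_per_cartridge
total_eb = total_tb / 1_000_000   # 1 EB = 1,000,000 TB (decimal units)

print(total_eb)   # -> 1.2
```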
Oracle T10000 series
Oracle got into the tape business when it acquired Sun Microsystems, which had bought StorageTek in 2005. Oracle’s latest tape drive is the T10000C, the third drive in the platform first released in 2006. Using the new T2 tape media, the drive is capable of storing up to 5 TB of data in native uncompressed format with a transfer rate of 240 MBps.
Oracle extended its SL8500 libraries in mid-2010 to enable 10 libraries to be linked for a total tape slot capacity of 100,000 cartridges and 1,000 PB, or 1 exabyte. This also supports as many as 640 tape drive units, including LTO and T10000 drives.
Clearly, the move toward “big data”—the buzzword for organisations’ growing volumes of unstructured and semistructured data—has driven significant scaling in drives and libraries. Tape has traditionally been a sequential-access medium, which makes it unsuitable for anything other than data archiving and backup. However, the introduction of media partitioning in LTO-5 has enabled tape to be accessed in a new way, using IBM’s LTFS.
LTFS uses the two partitions on an LTO-5 tape to create a data area (or data partition) and a much smaller index area (or index partition), with both partitions read and written independently by the drive. The index partition contains the index and associated metadata for the files in the data partition and is loaded into memory on a host machine when a tape is mounted on a drive.
IBM and HP offer host software for LTFS on Unix and Windows platforms. An LTFS-mounted tape can be read by the host in a similar fashion to a standard hard drive, although clearly at much slower speeds.
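Because the LTFS host software presents the tape as an ordinary mounted filesystem, standard file APIs work against it unchanged. The sketch below assumes a hypothetical mount point of `/mnt/ltfs` (actual mounting is performed by the vendor’s LTFS software) and simply walks the tree with Python’s standard library, exactly as it would for a hard drive:

```python
import os

def summarise(mount_point):
    """Walk an LTFS mount point (or any directory) using ordinary
    filesystem calls, returning (file_count, total_bytes)."""
    count, total = 0, 0
    for root, _dirs, files in os.walk(mount_point):
        for name in files:
            count += 1
            total += os.path.getsize(os.path.join(root, name))
    return count, total

# "/mnt/ltfs" is a hypothetical example path for a mounted LTFS tape;
# the same code works unmodified against any directory.
print(summarise("/mnt/ltfs"))
```

The point of the example is that no tape-specific API is needed once the tape is mounted; sequential positioning is handled transparently by the LTFS driver, which is also why access remains much slower than disk.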
Although LTFS will not turn tape into a random-access medium, it does create a standard format for writing files to tape. Today, every backup software vendor writes tape in its own proprietary format. Wide adoption of LTFS could make moving data to and from tape across applications a simple task.
Tape continues to hold its own in the data centre, with drive performance and capacity keeping pace with hard disk drives, and we can expect this growth to continue. The LTO consortium has already extended its roadmap to cover LTO-7 and LTO-8 formats, which will offer capacities of 16 TB and 32 TB (compressed), respectively. We can expect the other major vendors to match or exceed these figures over time.
Tape still offers the lowest price per gigabyte of all storage media and combines low-cost storage with longevity for archive information.
LTFS offers vendors a way to store data in a format-neutral way, independent of the proprietary formats in use today. If LTFS sees widespread adoption, then this will resolve many of the issues organisations have with media refreshes and retirement of old backup environments.
This means that tape storage drives are well positioned to cope with the demands of “big data.”