Feature

iSCSI vs. FC performance: A closer look at storage


What weighs more: one pound of bricks or one pound of feathers? Which is faster: 2 Gb FC or 1 Gb Ethernet? Hint: Both questions have the same answer.

One area of storage that is often misunderstood is iSCSI performance and how it compares to FC. Both of these SAN interconnects are typically measured by bandwidth, with "2 Gb" FC SANs dominating the storage market today and "1 Gb" Ethernet used for the majority of iSCSI SANs.

Which would you say is faster: a 2 Gb FC connection or a 1 Gb Ethernet connection? It's a trick question -- they are equally fast. They both transfer data at the speed of light. Bandwidth is not a matter of speed but of size. Consider the following analogy: a four-lane highway versus a two-lane highway. If only a few automobiles are traveling on either highway, drivers on both will be able to go the maximum speed. However, as more drivers take to each road, the two-lane highway will hit a bottleneck before the four-lane highway does.

The same is true of FC and Ethernet. A 2 Gb FC interconnect has twice the bandwidth (double the number of lanes) of 1 Gb Ethernet. Bandwidth has an impact on storage performance when large requests are being processed. In that case, most of the work is spent transferring data over the network, making bandwidth the critical path. For smaller read and write requests, however, the storage system spends more of its time accessing the data, making the CPU, cache memory, bus speeds and hard drives more important to overall application performance.
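To put rough numbers on that, here is a simple back-of-the-envelope sketch in Python. The request sizes and nominal link rates are illustrative assumptions, not ESG test data, and iSCSI/FC protocol overhead is ignored:

# Rough sketch with assumed request sizes; ignores protocol overhead.
def wire_time_ms(request_bytes, link_gbps):
    # Time to push the payload over the link, in milliseconds.
    return request_bytes * 8 / (link_gbps * 1e9) * 1000

for label, size in [("8 KB database I/O", 8 * 1024), ("1 MB streaming I/O", 1024 * 1024)]:
    print(f"{label}: 1 Gb Ethernet {wire_time_ms(size, 1.0):.3f} ms, "
          f"2 Gb FC {wire_time_ms(size, 2.0):.3f} ms")

Doubling the link halves the wire time in both cases, but for the small request either figure is a tiny fraction of a millisecond, so the extra lanes go largely unused.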

Unless you have a bandwidth-intensive application (e.g., streaming media or backup), the difference in performance will be minimal. Enterprise Strategy Group (ESG) Lab has tested storage systems that support both iSCSI and FC, and the performance difference is minimal -- ranging between 5% and 15%.

In fact, an iSCSI storage system can actually outperform an FC-based product, depending on factors that matter more than bandwidth -- including the number of processors, host ports, cache memory and disk drives, and how widely data can be striped across them.

The slowest component of the storage performance chain is the hard disk drive. It takes a hard disk drive much longer -- sometimes thousands of percent longer -- to access data than it takes the storage system's electronic components, such as the processors, bus and memory, to do their work. The timeline for an I/O starts with a read/write command being sent to the hard drive from the application. This is followed by a long mechanical wait while the drive moves the actuator to the right track, referred to as the seek process. The seek is by far the slowest part of storage performance. The drive then has to wait for the platter to rotate the requested data under the head, another mechanical delay known as rotational latency. Next, the data is transferred from the drive to the CPU and a status handshake is performed to terminate the request. The access time of a disk drive, which is seek plus rotational latency, is clearly responsible for the majority of the "wait time."
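To illustrate the gap, the rough sketch below uses assumed figures that are broadly typical of a 15,000 rpm enterprise drive; the numbers are assumptions for illustration, not measurements from any particular product:

# Assumed drive characteristics -- illustrative only.
AVG_SEEK_MS = 3.5                         # average seek time
ROT_LATENCY_MS = (60_000 / 15_000) / 2    # half a rotation at 15,000 rpm = 2.0 ms
TRANSFER_MB_PER_S = 100.0                 # sustained media transfer rate

request_bytes = 8 * 1024                  # an 8 KB read
mechanical_ms = AVG_SEEK_MS + ROT_LATENCY_MS
transfer_ms = request_bytes / (TRANSFER_MB_PER_S * 1e6) * 1000

print(f"Mechanical (seek + rotational latency): {mechanical_ms:.1f} ms")
print(f"Electronic transfer of 8 KB:            {transfer_ms:.3f} ms")
print(f"The mechanics take roughly {mechanical_ms / transfer_ms:.0f} times longer")

With these assumptions, the mechanical portion comes to about 5.5 ms against well under a tenth of a millisecond of transfer time -- which is why the drives, not the interconnect, dominate small-block performance.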

Traditional storage systems are typically limited in the number of drives across which they can stripe data. Many traditional storage systems can only stripe across up to 16 drives, while more advanced products can stripe across hundreds. Striping data across a large number of drives allows a system to leverage all of the actuators working in parallel, making reads and writes a much more efficient process. It increases performance and essentially eliminates the need to tune performance and hunt for hot spots. Naturally, there is a cost associated with acquiring more hard drives, so striking a balance between price and performance is important.
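The sketch below is a deliberately simplified model of that effect; the per-drive IOPS figure is an assumption for illustration, and it ignores RAID, caching and controller overhead:

# Simplified model: aggregate random IOPS grows with the number of actuators.
IOPS_PER_DRIVE = 180   # assumed small-block random IOPS for a single drive

for drives in (16, 48, 144):
    print(f"{drives:>3}-drive stripe group: ~{drives * IOPS_PER_DRIVE:,} IOPS")

In practice the scaling is not perfectly linear, but the principle holds: more actuators working in parallel means more random I/Os serviced per second.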

In ESG Lab head-to-head testing, we configured a storage system using traditional striping methods and another one using wide striping. ESG Lab used the same workloads to compare the performance of the traditionally configured system and that of a system using a wide stripe group of 48 drives. The stripe group of 48 drives significantly outperformed the traditional method.

A comparison of Iometer results revealed a 44% improvement in the number of disk I/Os per second when switching from traditional volumes to a 48-drive wide stripe group. That is a striking performance difference -- far more than the 5% to 15% difference we found between iSCSI and FC.

Some iSCSI storage systems may not have well-tuned, performance-optimized iSCSI target drivers. That is the fault of the storage vendor, whose R&D group needs to do a better job. Additionally, ESG Lab has found that using a TCP/IP offload engine (TOE) on the iSCSI target port within the storage system can have a measurable, positive impact on performance. Some iSCSI storage systems do not have integrated TOE support.

The architecture of the storage system, the speed and number of processors, the amount of memory and the intelligence of its caching algorithms, the speed of the disk drives and the number of drives in a stripe group, the number of host ports and the back-end interconnect all play a major role in performance. I recommend that you evaluate a storage system on all of these criteria. It is the storage system itself that makes the bigger difference; the speed of iSCSI is not the issue.


About the author: Tony Asaro is the senior analyst for Enterprise Strategy Group.



This was first published in March 2007

 
