iSCSI network configuration, design and optimization
It's a must to consider more than just cabling when designing an iSCSI-based storage network. Learn to look at the bigger picture when configuring or optimizing an iSCSI-based storage network.
Approved as an industry standard in 2003, iSCSI has evolved from hype to reality and continues to mature, with more features, more available solutions, broader interoperability and actual customer deployments now measured in the thousands.
To date, the StorageIO Group sees that around 85% to 90% of iSCSI deployments have used software-based iSCSI initiators and standard Ethernet NICs. While software-based iSCSI deployments are still in the majority, adoption of TCP Offload Engine (TOE)-enabled iSCSI adapters is on the rise, along with early 10 Gigabit Ethernet deployments. The decision to use a TOE adapter comes down to what level of server and I/O performance you need. If your servers are already overloaded, a TOE can help defer a server upgrade; on the other hand, if you have adequate server performance and network bandwidth, a TOE-enabled adapter may not be needed until later. Check with your vendors to verify functionality relevant to your specific configuration.
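As a rough illustration of the software-initiator approach, the following sketch uses the Linux open-iscsi tools to discover and log in to a target; the portal address and IQN shown are hypothetical placeholders, and other operating systems (for example, the Microsoft iSCSI Software Initiator) expose equivalent steps through their own interfaces.

    # Discover targets advertised by the array's portal (address is an example)
    iscsiadm -m discovery -t sendtargets -p 192.168.50.10:3260
    # Log in to a discovered target (IQN is a placeholder)
    iscsiadm -m node -T iqn.2003-01.com.example:array1 -p 192.168.50.10:3260 --login
    # Confirm the active session
    iscsiadm -m session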
Storage systems, including disk arrays and tape libraries, support iSCSI either natively, with some number of built-in Gigabit Ethernet ports, or via external iSCSI routers (bridges or gateways). Some iSCSI-enabled storage systems support concurrent iSCSI block-based and NAS file-based access, while others support a mix of iSCSI and Fibre Channel (FC), or a combination of iSCSI, FC and NAS. Still other vendors support iSCSI and Fibre Channel, but not concurrently.
Directly connected iSCSI, using point-to-point Ethernet between a server and an iSCSI router or iSCSI storage, is applicable for environments that do not need shared iSCSI connectivity. An example is attaching a server running Microsoft Windows Storage Server (WSS) software, functioning as a NAS filer and data server, to a dedicated iSCSI storage array.
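For a direct-connect setup like the one above, the dedicated link can sit on its own small subnet with no gateway involved. As a minimal Linux sketch (the interface name and addresses are assumptions):

    # Address the dedicated NIC on a /30 point-to-point subnet
    ip addr add 10.0.0.1/30 dev eth2
    ip link set dev eth2 up
    # The array's port would be addressed as 10.0.0.2/30; no default route is needed on this link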
Choosing single- or multi-path access to iSCSI storage is a decision that balances cost against performance and the elimination of single points of failure (SPOF). For smaller servers or less critical applications, single attachment may be sufficient to meet your service-level objectives for uptime and data access. For the relatively low cost of a second NIC or HBA combined with host-based path management software, you get automated failover and load balancing, and you eliminate a SPOF.
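As a sketch of the host-based path management side, assuming a Linux host with device-mapper multipath and two NICs each logged in to the same target, a minimal /etc/multipath.conf and verification might look like the following; the policy values are illustrative defaults, not array-specific recommendations, so check your storage vendor's documented settings:

    # /etc/multipath.conf -- minimal illustrative defaults
    defaults {
        path_grouping_policy multibus   # spread I/O across all working paths
        path_checker tur                # probe path health with TEST UNIT READY
    }

    # Start the multipath daemon and confirm both paths are grouped per LUN
    systemctl enable --now multipathd
    multipath -ll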
Your decision to use a shared or public network for iSCSI will be influenced by security and performance considerations -- which, in turn, are influenced by cost. At a minimum, logically isolate iSCSI traffic using VLANs and other techniques, along with security features including encryption and VPNs.
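As one concrete way to achieve that logical isolation, a Linux host can place its iSCSI traffic on a tagged VLAN subinterface; the VLAN ID, interface name and address below are examples, and the corresponding switch ports would need to carry the same VLAN:

    # Create a VLAN 100 subinterface for iSCSI traffic on eth1
    ip link add link eth1 name eth1.100 type vlan id 100
    ip addr add 192.168.100.5/24 dev eth1.100
    ip link set dev eth1.100 up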
Looking to the future, 10 Gigabit Ethernet should certainly provide a performance boost in raw theoretical network line speed; however, software will be the key to exploiting new technologies such as iSCSI Extensions for RDMA (iSER) and remote direct memory access (RDMA). Look beyond the wire or optical cabling when designing and deploying iSCSI-based storage networks, and consider server, storage and software capabilities.
About the author: Greg Schulz is founder and senior analyst with the IT infrastructure analyst and consulting firm StorageIO. Greg is also the author and illustrator of "Resilient Storage Networks" (Elsevier).