iSCSI SAN technology is changing the economics of SAN affordability. Not only is Fibre Channel SAN technology expensive, but deploying and managing it is complicated. This has made SANs impractical for small and medium-sized businesses (SMBs), and even enterprise data center budgets limit the number of servers or storage systems that can be added to a Fibre Channel SAN.
But iSCSI changes all that, supporting block-based SANs over existing Ethernet technologies that are far less expensive, not to mention more broadly understood by IT professionals. Today, countless small and medium-sized enterprises are adopting iSCSI, and large enterprises are deploying iSCSI SANs for remote offices, workgroups and departments.
Still, iSCSI can coexist with Fibre Channel SANs through the use of iSCSI routers or bridges to connect the two architectures. This coexistence of iSCSI and Fibre Channel may be intriguing to large organizations that have invested in Fibre Channel infrastructure but plan to adopt iSCSI over time.
Here are eight best practices for iSCSI SAN deployment and for integrating iSCSI with Fibre Channel.
Best Practice No. 1: Weigh the need for iSCSI vs. Fibre Channel
While iSCSI does provide notable benefits for SAN users, not all users will benefit equally from iSCSI in their enterprise. Even though iSCSI is almost always cheaper than Fibre Channel and usually performs as well, that doesn't mean iSCSI is always appropriate.
Still, there are many cases when iSCSI would make a smart add-on to an existing Fibre Channel storage infrastructure, such as deploying a large number of low-end host devices. "If you really want Fibre Channel, but you also have 100 Windows servers that you need to add to the SAN, and you can't afford to buy all the Fibre Channel host bus adapters (HBAs), then use iSCSI," says Stephen Rosetta, director of data practice at Contoural. The trick is understanding the role that iSCSI will play in your particular environment, weighing the management demands and determining whether there's adequate justification for "blending" the two SAN technologies.
Experts also note that there is an inflection point for both iSCSI and Fibre Channel SANs. Although it's probably not worth adding iSCSI for just a few devices, it might make economic sense to replace small Fibre Channel deployments with iSCSI -- especially those not providing critical performance benefits for key applications -- to create a single fabric rather than preserving two SAN types in the business.
Best Practice No. 2: Don't assume Fibre Channel outperforms iSCSI
Some companies avoid iSCSI because of its reputation for so-so performance. But in actual practice, Fibre Channel and iSCSI are equally capable of supporting the same block-based applications. According to research from Enterprise Strategy Group (ESG), half of the early adopters of iSCSI are using it for mission-critical applications -- a strong statement of support for iSCSI reliability. Only the most bandwidth-intensive applications might push Ethernet into a bottleneck. These include OLTP applications that handle a large number of small transactions and could suffer a performance penalty due to packet overhead in the IP environment.
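To see why small-block OLTP traffic is the sensitive case, consider a back-of-envelope model of framing overhead. The sketch below assumes standard header sizes, a 1,500-byte MTU (1,460-byte TCP payload per segment) and one iSCSI PDU per I/O; real stacks vary, so treat the numbers as illustrative only.

```python
import math

# Rough per-segment framing costs on a standard 1,500-byte-MTU Ethernet.
ETH_OVERHEAD = 18   # Ethernet header + frame check sequence, bytes
IP_HEADER = 20      # IPv4 header, no options
TCP_HEADER = 20     # TCP header, no options
ISCSI_BHS = 48      # iSCSI basic header segment per PDU
MSS = 1460          # TCP payload per segment at a 1,500-byte MTU

def wire_efficiency(io_bytes: int) -> float:
    """Fraction of bytes on the wire that are application data."""
    payload = io_bytes + ISCSI_BHS               # data plus iSCSI header
    segments = math.ceil(payload / MSS)          # TCP segments needed
    wire = payload + segments * (ETH_OVERHEAD + IP_HEADER + TCP_HEADER)
    return io_bytes / wire

print(f"512 B OLTP-style I/O: {wire_efficiency(512):.1%} efficient")
print(f"64 KB sequential I/O: {wire_efficiency(65536):.1%} efficient")
```

Under these assumptions, a small 512-byte transaction spends a noticeably larger share of its wire time on headers than a large sequential transfer does, which is exactly the overhead penalty described above.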
Best Practice No. 3: Don't hold off for 10 Gbps Ethernet
Some IT professionals are delaying the adoption of iSCSI until 10 Gbps Ethernet is readily available. The assumption is that the added bandwidth promised by 10 Gigabit Ethernet (GigE) will improve iSCSI performance. But since iSCSI adopters are already seeing adequate performance using current 1 Gbps connectivity, moving to 10 Gbps would have little effect on application performance. There's no need to wait for 10 GigE; it's worth evaluating iSCSI now, even if the extra bandwidth holds no immediate appeal.
Best Practice No. 4: Consider the use of TOE HBAs for iSCSI
Although iSCSI can be deployed using conventional Ethernet NICs and switches, you might want to consider more specialized iSCSI SAN NICs and switches. The latest generation of NICs includes a built-in firmware-based initiator and may incorporate TCP/IP Offload Engine (TOE) capability. A TOE performs much of the iSCSI command processing directly on the NIC, improving iSCSI performance by offloading low-level processing tasks from the host's main CPU. You can also use Ethernet (iSCSI) switches with high-performance, low-latency ports to improve data transfers across the iSCSI SAN.
Best Practice No. 5: Take iSCSI security into account
Contrary to popular belief, iSCSI SANs can be more secure than Fibre Channel SANs. For example, iSCSI establishes security through advanced authentication methods, such as the Challenge-Handshake Authentication Protocol (CHAP). According to Rosetta, CHAP is "a much more secure method" and is "super simple to set up because people have been using CHAP in the IP world for a decade."
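As an illustration of that simplicity, on a Linux host running the open-iscsi initiator, CHAP can be enabled with a few lines in `iscsid.conf`. The IQNs and secrets below are placeholders, and your array will have its own procedure for setting the matching credentials on the target side:

```
# /etc/iscsi/iscsid.conf -- CHAP settings for the open-iscsi initiator
# (usernames and secrets below are hypothetical examples)
node.session.auth.authmethod = CHAP
node.session.auth.username = iqn.2006-01.com.example:server1
node.session.auth.password = initiator-secret-12

# Optional mutual CHAP: the target must also prove its identity
node.session.auth.username_in = iqn.2006-01.com.example:array1
node.session.auth.password_in = target-secret-34
```

With mutual CHAP, authentication runs in both directions, so a rogue device cannot impersonate either the initiator or the target.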
The authentication protocols native to Fibre Channel are rarely used. Many SAN architects instead rely on the relatively isolated nature of Fibre Channel fabrics and the complexities of LUN zoning and masking to keep SAN data secure. Furthermore, Fibre Channel does not support native encryption over the wire, but iSCSI can use IPsec encryption to protect data in flight.
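As a sketch of what in-flight encryption might look like, a transport-mode IPsec policy can be scoped to just the iSCSI port (TCP 3260) so that only storage traffic between the two endpoints is encrypted. The fragment below uses the classic `ipsec.conf` format found in strongSwan and Openswan; the addresses are hypothetical:

```
# ipsec.conf fragment: encrypt only iSCSI traffic between two hosts
# (addresses and connection name are hypothetical examples)
conn iscsi-encrypt
    type=transport             # protect the payload, keep original IPs
    left=192.168.100.11        # iSCSI initiator host
    right=192.168.100.50       # iSCSI target array
    leftprotoport=tcp          # any source port on the initiator
    rightprotoport=tcp/3260    # iSCSI well-known port on the target
    authby=secret              # pre-shared key, defined in ipsec.secrets
    auto=start
```

Note that IPsec adds CPU load on both ends, so it is usually reserved for iSCSI traffic that crosses untrusted network segments.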
One weakness of iSCSI security lies in the ubiquity of Ethernet networks. It's a bad idea to pass storage data on the same network that handles everyday user traffic, because your secure SAN data can "leak out" over the LAN, possibly onto the Internet.
As a result, network designers must isolate iSCSI SAN data from Ethernet user data. One way to accomplish this is to build a separate physical LAN dedicated to iSCSI use, but it's usually more practical to partition the network logically. For example, you could establish an iSCSI SAN using a virtual LAN (VLAN), which carves the physical LAN into a logical portion used exclusively by the SAN. This allows storage administrators to regulate and guard the traffic that the VLAN carries.
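On a managed switch, such partitioning typically amounts to defining the VLAN and pinning the iSCSI-facing ports to it. A minimal Cisco IOS-style sketch, with hypothetical VLAN and port numbers, might look like this:

```
! Hypothetical switch config: dedicate VLAN 100 to iSCSI SAN traffic
vlan 100
 name iSCSI-SAN
!
interface GigabitEthernet0/10
 description Server iSCSI NIC
 switchport mode access
 switchport access vlan 100
!
interface GigabitEthernet0/20
 description iSCSI storage array port
 switchport mode access
 switchport access vlan 100
```

Because ordinary user traffic stays on other VLANs, storage frames never mingle with LAN traffic even though both share the same physical switch.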
Best Practice No. 6: Look for mature initiators
Most network activities are operated and directed through software drivers. In the iSCSI realm, the drivers that reside on host servers are known as initiators. Their performance is often limited by poor coding methodologies or inadequate testing. Experts suggest that you use mature initiator software from the NIC vendor or a recognized source, such as Microsoft.
You can also rely on the firmware that is hard-coded into iSCSI NICs, such as Alacritech's SES2100 Accelerator card, the Magic 2028-4P 1 Gb Copper TCP/IP Accelerated NIC from LeWiz Communications and the QLogic QLA4050C iSCSI HBA. The coming shift to 10 GigE may require the use of hardware-based initiators.
Best Practice No. 7: Make sure virtualization software supports iSCSI
It's common to see iSCSI performance bottlenecks in virtual servers. Even though you can easily add more virtual machines to a physical server, you're stuck with the same amount of server resources, such as CPU, memory and I/O. Many virtual machines demand significant storage access, so virtualization software should maximize the utilization of your iSCSI NICs, particularly TOE cards.
However, make sure that the virtualization software fully supports iSCSI. Rosetta reports that current VMware versions do not support iSCSI HBAs. This issue should be resolved soon, but in the meantime, there is no real workaround other than to employ Fibre Channel in the virtualized environment.
Best Practice No. 8: Embrace iSCSI's new storage management features
Vendors are incorporating ambitious feature sets into new iSCSI products that may not have been affordable (or even practical) in comparable Fibre Channel products. This new period of iSCSI creativity is spawning integrated features, such as thin provisioning, sub-disk RAID and automated tiered storage, which are a real boon for iSCSI SAN adopters.
In addition, iSCSI arrays are noted for their scalability, making it easy to buy and deploy additional iSCSI arrays over time with little, if any, direct management overhead.