LUN storage and its role in SAN management

Antony Adshead, UK Bureau Chief

This article explains logical unit number storage (LUN storage), as well as how LUNs relate to volumes, zoning and masking in SAN management.

Table of contents:

OSes must speak with physical disks in a language they understand
LUNs are logical drives
LUNs are 'soft' partitions
The definition of a volume
SAN zoning and masking maintain security on the fabric
Zoning for device-to-device authorisation
Masking hides LUNs within a zone
LUN scaling and performance
LUN management tools

 OSes must speak with physical disks in a language they understand 

The bedrock of storage is the disk drive, but an operating system cannot access disk drives without mediation via logical addressing that translates the physical characteristics of the disk — platters, heads, tracks and sectors — into a language the operating system can understand.
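The classic form of this translation maps a cylinder/head/sector (CHS) address to a flat logical block address (LBA). The sketch below shows the standard CHS-to-LBA formula; the geometry values are illustrative examples, not any particular drive's.

```python
# Illustrative sketch: translating a physical CHS address (cylinder,
# head, sector) into the flat logical block address (LBA) that an OS
# addresses. The geometry constants below are example values.

HEADS_PER_CYL = 16       # heads per cylinder (example geometry)
SECTORS_PER_TRACK = 63   # sectors per track (example geometry)

def chs_to_lba(cylinder: int, head: int, sector: int) -> int:
    """Standard CHS-to-LBA translation; sector numbers are 1-based."""
    return (cylinder * HEADS_PER_CYL + head) * SECTORS_PER_TRACK + (sector - 1)

print(chs_to_lba(0, 0, 1))   # first sector of the disk -> LBA 0
```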

 LUNs are logical drives 

For that reason, storage subsystems have their physical disks partitioned into logically addressed portions that allow host servers to access them. This partition is called a LUN. For example, most PC users will be familiar with the partition of a single disk into a C: drive for applications and data, plus a D: drive for recovery purposes.

 LUNs are 'soft' partitions 

There is no 1:1 relationship between physical disk drives and LUNs. When provisioning storage, the administrator uses management software to create LUNs. They can create, for example, more than one LUN from one physical drive, which would then appear as two or more discrete drives to the user. Or they may create a number of LUNs that span several separate disks that form a RAID array; but, again, these will appear as discrete drives to users.
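The mapping described above can be sketched as a simple table. This is a minimal illustration with invented LUN and drive names: one physical drive carved into two LUNs, and one LUN spanning a four-drive RAID set; in every case the host simply sees a discrete logical drive.

```python
# Minimal sketch (hypothetical names): LUNs as 'soft' partitions.
# A LUN may be a slice of one physical drive or may span several
# drives in a RAID set; the host sees each as a discrete drive.

luns = {
    # one physical drive carved into two LUNs
    "LUN_0": {"drives": ["disk1"], "size_gb": 100},
    "LUN_1": {"drives": ["disk1"], "size_gb": 200},
    # one LUN striped across a four-drive RAID set
    "LUN_2": {"drives": ["disk2", "disk3", "disk4", "disk5"], "size_gb": 800},
}

for name, lun in luns.items():
    print(f"{name}: {lun['size_gb']} GB on {len(lun['drives'])} drive(s)")
```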

LUNs can be shared between several servers; for example, between an active server and a failover server. But problems can arise if a number of servers access the same LUN at the same time. There needs to be a method of ensuring data integrity because blocks are subject to change by the activities of those servers. For this, you need something like a clustered volume manager, clustered file system, clustered application or a network file system using NFS or CIFS.
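The integrity problem can be illustrated by analogy. In this sketch, a `threading.Lock` stands in for the cluster-wide lock manager that a clustered volume manager or clustered file system would provide; without such serialisation, read-modify-write cycles from two writers can interleave and lose updates.

```python
# Analogy sketch: two "servers" updating the same block of a shared
# LUN. threading.Lock stands in for a cluster-wide lock manager;
# without it, concurrent read-modify-write cycles can lose updates.
import threading

block = {"value": 0}        # a shared block on the LUN
lock = threading.Lock()     # stand-in for the cluster lock manager

def server_writes(n: int) -> None:
    for _ in range(n):
        with lock:          # serialise the read-modify-write cycle
            block["value"] += 1

t1 = threading.Thread(target=server_writes, args=(10000,))
t2 = threading.Thread(target=server_writes, args=(10000,))
t1.start(); t2.start(); t1.join(); t2.join()
print(block["value"])       # 20000 -- no updates lost
```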

 The definition of a volume 

LUN and volume are frequently used interchangeably. But it is worth noting that volume is also often used to describe groups of several LUNs created with volume manager software.

 SAN zoning and masking maintain security on the fabric 

Provisioning LUNs and volumes is only one part of storage provisioning. The storage-area network (SAN) fabric must also be configured so that drive arrays and LUNs are managed and only authorised servers can access specific LUNs. For this we use SAN zoning and LUN masking.

 Zoning for device-to-device authorisation 

On a Fibre Channel network you can limit which storage subsystems and servers are able to see each other by putting them in the same zone when you configure the fabric switch. Zoning allows the specified servers to see one or more ports on a disk array. It can also reserve minimum levels of bandwidth between certain servers and subsystems, and it blocks traffic between devices that do not share a zone.

Zoning can be hard or soft. In a nutshell, hard zoning assigns a device to a zone by reference to a switch port; anything connected to that port is then in that zone. Soft zoning assigns a node to a zone according to its Fibre Channel World Wide Name (WWN). The switch places designated node WWNs in a zone, and it doesn't matter which port they're connected to.
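Soft zoning can be sketched as a membership check. In this hedged illustration the zone name and WWNs are invented: two nodes may communicate only if some zone contains both of their WWNs, regardless of which switch port each is plugged into.

```python
# Sketch of soft (WWN-based) zoning: membership is decided by a node's
# World Wide Name, not by the switch port it is connected to.
# Zone names and WWNs below are invented for illustration.

zones = {
    "zone_prod": {"50:01:43:80:aa:bb:cc:01",   # server HBA WWN
                  "50:06:01:60:dd:ee:ff:02"},  # array port WWN
}

def can_communicate(wwn_a: str, wwn_b: str) -> bool:
    """Two nodes may talk only if some zone contains both WWNs."""
    return any(wwn_a in members and wwn_b in members
               for members in zones.values())

print(can_communicate("50:01:43:80:aa:bb:cc:01",
                      "50:06:01:60:dd:ee:ff:02"))  # True
```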

 Masking hides LUNs within a zone 

LUN masking adds a finer level of control to zoning. You may have zoned a server and storage subsystem together, but you may not want the server to see all of its LUNs. After the SAN has had zones configured, LUNs can be masked so that a server can see only the ones you want it to see.

If two servers were zoned to two LUNs — LUN_A and LUN_B — the two servers would see the two LUNs. But if we use LUN masking, we could restrict one server so that it sees only LUN_A and mask the other so it sees only LUN_B. Masking can be done in two places: at the array port, where any disks on that port will be seen by servers accessing that port; or at the server, which allows it to see only the LUNs assigned to it.
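The masking step in the example above can be sketched as an extra filter applied after zoning. The server names here are invented; the LUN names follow the text. A server's visible set is the intersection of the LUNs it is zoned to and the LUNs its mask allows.

```python
# Sketch of LUN masking layered on zoning, following the article's
# example: both servers are zoned to LUN_A and LUN_B, but masking
# restricts each to its own LUN. Server names are hypothetical.

masking_table = {
    "server1": {"LUN_A"},
    "server2": {"LUN_B"},
}

def visible_luns(server: str, zoned_luns: set) -> set:
    """A server sees only the zoned LUNs its mask also allows."""
    return zoned_luns & masking_table.get(server, set())

zoned = {"LUN_A", "LUN_B"}
print(visible_luns("server1", zoned))  # {'LUN_A'}
print(visible_luns("server2", zoned))  # {'LUN_B'}
```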

 LUN scaling and performance 

LUN performance and reliability will vary according to the disk or configuration of disks upon which they reside, so it is important to consider the physical medium and its characteristics when planning LUNs as part of storage provisioning.

For example, a LUN that resides on a Fibre Channel 15,000 rpm disk will perform better than an identical LUN on a 5,400 rpm SATA disk. RAID configuration also affects performance and reliability, so the characteristics of the RAID type used for LUNs need to be taken into account.
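The rough arithmetic behind the spindle-speed comparison: average rotational latency is the time for half a revolution, i.e. 30,000 / rpm milliseconds, so the 15,000 rpm disk waits well under half as long per access as the 5,400 rpm disk before the data even rotates under the head.

```python
# Average rotational latency is half a revolution:
#   60000 ms per minute / rpm revolutions = one revolution in ms,
#   so half a revolution is 30000 / rpm ms.

def avg_rotational_latency_ms(rpm: int) -> float:
    return 30000 / rpm

print(avg_rotational_latency_ms(15000))  # 2.0 ms for a 15k FC disk
print(avg_rotational_latency_ms(5400))   # ~5.6 ms for a 5,400 rpm SATA disk
```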

 LUN management tools 

An enterprise storage infrastructure might contain thousands of LUNs, so software tools are essential to enable efficient LUN creation, management and reporting. LUN management tools are widely available, with most storage vendors providing some management tools.

There are vendor-specific or generic tools available, and the choice will often come down to whether your shop uses products from a single vendor or is heterogeneous. It is worth noting that generic LUN management tools sometimes work better with a vendor's own LUNs.

LUN management tools should be selected so they support the whole storage provisioning process, including mapping to specific array ports, masking specific host bus adapters (HBAs), reporting functions and reclamation of storage that is no longer being used.

 

