In the podcast, Steve lists the questions he is hearing most often from storage managers in the UK pertaining to SANs and SAN hardware. Below, you can read his answers to these frequently asked questions regarding SAN hardware or download the podcast itself.
Table of contents:
If my SAN has limited expansion capacity, should I replace the entire SAN or create a new one for growth only?
How do I change my SAN vendor?
What size drive should I use for my new storage arrays?
How do I get data from my current overloaded SAN to a new array?
If my SAN has limited expansion capacity, should I replace the entire SAN or create a new one for growth only?

That's a difficult question, and the answer depends on the status of your current SAN. If your SAN is obsolete, your only option is to replace it. If it is still viable, you can either replace it or create a new one to use for growth.
You also need to take into account how big your current SAN is. If it's a large SAN, the recommendation may be to create a new one, because when you have a firmware upgrade, for example, you have to take down one half of the fabric at a time, upgrade the firmware on the switches on that fabric, bring it back up and then do the same to the second fabric.
As a SAN gets larger it affects more hosts, so you have to liaise with your application owners, because while you're upgrading half the SAN, hosts lose dual pathing and with it their resilience.
You also have to determine whether there is a large amount of storage on the current SAN, and if there is, whether you can move it to the new SAN. You don't want to leave a large amount of free capacity stranded on the old SAN, where it is effectively just wasted.
Also, if you don't currently have something like a test or development SAN, you may want to downgrade your current production SAN to fulfil that role, then buy the newer version of the switches for your production environment.
How do I change my SAN vendor?

Historically, interoperability between fabrics from two vendors was almost impossible. Now, with the implementation of standards, you get a base level of functionality where switches from different vendors will interoperate. However, vendors generally provide enhanced features over and above the standard set, which they use as a unique selling point for their switches. If you rely on those features, they may be a barrier to interoperability between one vendor's switches and another's.
You also have to look at the skills your personnel have. You might get a very good deal on a SAN and switches, but do you have the skills to implement, support and troubleshoot such an environment?
What size drive should I use for my new storage arrays?

New arrays come with a multitude of drive sizes, from 73 GB drives, which are currently being phased out, up to 1 TB SATA drives. One consideration is that higher-capacity drives deliver less performance per gigabyte but are cheaper to purchase and maintain per gigabyte. An array has a finite level of performance, so if you have an array with 100 TB of storage, the performance you get from it will be the same whether you're using 20 TB or 100 TB. This is particularly true for midrange storage arrays.
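To make the performance-per-gigabyte trade-off concrete, here is a rough back-of-the-envelope comparison. The per-drive IOPS figures below are illustrative assumptions of typical 2008-era drive behaviour, not vendor specifications:

```python
# Rough illustration of performance per gigabyte for different drive types.
# Per-drive IOPS figures are assumptions for illustration, not vendor specs.

def iops_per_gb(drive_gb, drive_iops, drive_count):
    """Aggregate IOPS divided by aggregate capacity for a set of drives."""
    total_iops = drive_iops * drive_count
    total_gb = drive_gb * drive_count
    return total_iops / total_gb

# 16 x 146 GB Fibre Channel drives, assuming ~180 IOPS each
fc = iops_per_gb(146, 180, 16)

# 16 x 1000 GB SATA drives, assuming ~80 IOPS each
sata = iops_per_gb(1000, 80, 16)

print(f"FC:   {fc:.2f} IOPS per GB")
print(f"SATA: {sata:.2f} IOPS per GB")
```

Under these assumed figures, the Fibre Channel shelf delivers more than ten times the IOPS per gigabyte of the SATA shelf, which is why high-capacity SATA suits backup and archive rather than busy production workloads.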
You may have applications that are very resource-intensive but don't use a lot of storage, so you have to make sure you don't just add servers and connect them to an array without checking whether the array's performance characteristics will create a bottleneck.
Different drive types have different performance characteristics, such as the difference between SATA and Fibre Channel drives. We've had some customers attempt to use SATA drives for production environments that are resource-intensive and they generally find Fibre Channel drives deliver a higher level of performance. They generally use SATA drives for backup, archiving and less critical applications.
There are other considerations. What is the total capacity you need and the throughput for that capacity? For the capacity that your organisation needs, do you require more performance than one array will give you? In instances like that, you might want to have lower-capacity drives and spread capacity over more than one array. This will make performance per gigabyte higher and allow you to service your applications correctly.
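The sizing question above reduces to simple arithmetic: an array must be counted against both its capacity ceiling and its performance ceiling, and the larger of the two counts wins. A minimal sketch, with all array and workload figures hypothetical:

```python
import math

def arrays_needed(required_gb, required_iops, array_max_gb, array_max_iops):
    """Arrays needed to satisfy BOTH the capacity and the IOPS requirement.

    All limits are hypothetical planning figures, not vendor ratings.
    """
    by_capacity = math.ceil(required_gb / array_max_gb)
    by_performance = math.ceil(required_iops / array_max_iops)
    # The binding constraint is whichever requires more arrays.
    return max(by_capacity, by_performance)

# 80 TB and 120,000 IOPS against a midrange array assumed to be good for
# 100 TB and 50,000 IOPS: capacity fits in one array, but performance
# forces the capacity to be spread across three.
print(arrays_needed(80_000, 120_000, 100_000, 50_000))  # -> 3
```

In this hypothetical case one array would hold the capacity comfortably, but spreading the same capacity over three arrays triples the available performance per gigabyte, which is exactly the trade-off described above.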
One thing that people should take into account is capacity planning. If you do it correctly, there should be a limited requirement to move data from one array to another.
How do I get data from my current overloaded SAN to a new array?

That's a key issue in SAN environments and one that quite often catches organisations out. You have three options if an array is nearing its capacity.
- You can virtualise that data on the array behind another array.
- You can migrate data using hardware replication.
- You can use software migration.
Hardware replication requires minimal host configuration. One caveat is that it's quite difficult to set up. If the source and target arrays are the same model from the same vendor, it's no problem; if they are from different vendors, or even different devices from the same vendor, it can be very difficult indeed.
If the hardware is on the same SAN, that can ease the replication path, and if the arrays have free ports, you can make a direct connection between them and transfer data that way. Once you've set up the replication, the server in question will need to be shut down, pointed at the secondary array, then rebooted to see the new array. More storage can then be added to the new array.
Software migration is more host-intensive and will affect applications more than a hardware solution. There are two ways to do that. The first is a software mirror, using a volume manager of your choice to create a mirror between the LUNs on the first array and the second. When the data is synchronised, you would remove the original LUNs from the mirror and move them off the host. The advantage is that the application doesn't have to be taken down.
The second way is a simple data copy from the source LUN to the target LUN, but this is a point-in-time copy and requires the application to be shut down while the copy takes place. Once the copy is complete, the application has to be pointed at the new LUN and the old LUN removed.
About the author: Steve Pinder is a principal consultant at GlassHouse Technologies (UK) Ltd. He has more than 11 years' experience in backup and storage technologies and has been involved in many deployments for companies of varying sizes, with responsibilities ranging throughout the sales and deployment lifecycle.
This was first published in December 2008