What are the key steps in SAN migration?
The keys to a successful SAN migration project are the planning and development of repeatable processes.
Your first step is to develop a documented understanding of your current environment. This will provide the basis for decisions that need to be made as the project progresses.
You’ll need to gather the metrics of the data centre’s current state and place the findings into a central repository (a database or spreadsheet) by documenting the environment from a hardware, software and application perspective.
The hardware category should include makes, models and characteristics of servers, switches, network cards, host bus adapters, the various microcode and firmware levels associated with each, hardware connectivity to Ethernet and Fibre Channel switches, and port availability.
The software category should include the operating system and patching levels, software components (database installations, web servers and so on, along with their version levels) and hardware drivers.
Once the hardware and software components have been identified, you’ll need to identify application composition and dependencies. SAN storage allocation, RAID levels, port counts and ratios by host should be documented, along with provisioned versus utilised allocation. You may find opportunities for smaller allocations, alternative migration strategies or potential consolidation moving forward.
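As a sketch of what a central repository entry might look like, the following Python record captures the hardware, software and storage attributes described above and derives a provisioned-versus-utilised ratio. The field names and the 50% consolidation threshold are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class HostRecord:
    # Hypothetical inventory record; field names are illustrative.
    hostname: str
    server_model: str
    os_version: str
    hba_firmware: str
    fc_ports: int
    raid_level: str
    provisioned_gb: float
    utilised_gb: float
    applications: list = field(default_factory=list)

    def utilisation_ratio(self) -> float:
        """Utilised versus provisioned capacity; a low ratio flags a
        candidate for a smaller allocation or consolidation."""
        if not self.provisioned_gb:
            return 0.0
        return self.utilised_gb / self.provisioned_gb
```

A host whose ratio falls well below 1.0 (say, under 0.5) is worth reviewing for a smaller allocation on the target SAN.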
You’ll also need to have a clear understanding of the future state of the SAN. For a SAN migration to be successful and supported by the vendors involved, you’ll need to understand the data’s end-to-end path. Compatibility matrices (provided by the vendors) will be crucial in identifying requirements from the host through to the end storage device.
Depending on the migration route that is to be adopted by a given host, you may also need to consider other components, such as storage virtualisation engines, from both a current- and future-state perspective.
Having analysed the current and future states together with application dependencies, and working within the remit of the target infrastructure's capabilities, you can align migration strategies against each component.
Those migration strategies will need to take into account how much downtime (if any) is acceptable to a given application, the volume of data a particular technique can move within a given window, and whether a given host is capable of working within the required strategy.
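The downtime-versus-volume check above is simple arithmetic. Here is a minimal sketch, assuming a sustained throughput figure you have measured during testing; the 1.2 overhead factor for verification and cutover tasks is an assumption, not a vendor-quoted value.

```python
def fits_window(data_gb: float, throughput_mb_s: float, window_hours: float,
                overhead_factor: float = 1.2) -> bool:
    """Rough check of whether a technique can move data_gb within
    window_hours at a sustained throughput_mb_s; overhead_factor pads
    the raw transfer time for verification and cutover tasks."""
    transfer_seconds = (data_gb * 1024) / throughput_mb_s
    transfer_hours = transfer_seconds / 3600
    return transfer_hours * overhead_factor <= window_hours
```

For example, 500 GB at a sustained 100 MB/s fits comfortably in a four-hour window, while 5 TB at the same rate does not; the latter would need a block-level or pre-staged approach.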
It is therefore important for you to carry out a testing phase against each platform and migration strategy and to document your findings. You might need to upgrade components of the current environment to enable certain capabilities. Migration strategies might include file level (rsync, Robocopy, backup/restore), block level (Linux LVMs, virtualisation engines), merging or routing between fabrics, or physical-to-virtual migration with enhancements such as snapshots. Each of these has its own limitations and/or constraints.
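For the file-level route, a dry-run rehearsal before the real cutover is a sensible part of the testing phase. This sketch assembles an rsync command in Python; the paths are placeholders, and the flag set (`-a` archive mode, `-H` hard links, `-X` extended attributes, `--delete` to mirror deletions) is one reasonable choice, not a mandated one.

```python
def build_rsync_cmd(src: str, dst: str, dry_run: bool = True) -> list:
    """Assemble a file-level copy command for one of the file-based
    techniques mentioned above. Defaults to a dry run so the transfer
    can be rehearsed before the real migration window."""
    cmd = ["rsync", "-aHX", "--delete"]
    if dry_run:
        cmd.append("--dry-run")  # report what would change, copy nothing
    cmd += [src, dst]
    return cmd

# Rehearse first, then rerun with dry_run=False inside the window, e.g.:
# subprocess.run(build_rsync_cmd("/data/", "/mnt/new_san/data/", dry_run=False), check=True)
```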
Typically, you’ll have to complete the SAN migrations within tight time constraints, so it’s a good idea to develop standardised processes and run books to enable repeatability and stability.
To avoid delays, you should develop “wheels” of activity that enable a rolling programme of, for example, planning, preparation, installation, upgrade, provisioning and migration execution for different elements of the overall project. These activities can run in parallel with days or weeks between the start of each one.
For example, if Week 1 is spent defining the hosts (or applications) to be migrated and Week 2 involves defining or understanding prerequisites, then in Week 2 a new Week 1 definition stage should also begin for the next part of the project.
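The staggered "wheels" above can be laid out programmatically. This sketch assigns a start week to each phase of each migration batch, with each batch beginning one week after the previous one; the batch and phase names are illustrative.

```python
def rolling_schedule(batches: list, phases: list, stagger_weeks: int = 1) -> dict:
    """Return {(batch, phase): start_week} for a rolling programme in
    which each batch starts its phase sequence stagger_weeks after the
    previous batch, so phases for different batches run in parallel."""
    schedule = {}
    for i, batch in enumerate(batches):
        for j, phase in enumerate(phases):
            schedule[(batch, phase)] = i * stagger_weeks + j + 1
    return schedule
```

With two batches and the phases "definition" then "prerequisites", batch A's prerequisites and batch B's definition both land in Week 2, mirroring the example above.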
Having carried out all the above steps, the actual migration becomes routine, whether you're migrating one device or hundreds at a time. A well-planned migration eliminates pain and minimises downtime in a consistent and repeatable manner.
This was first published in August 2011