Data migration is, at its core, data movement: taking data from one folder, partition, disk or disk subsystem and placing it in another physical location. In many cases, data is migrated to accommodate tiered storage -- often as part of a data classification initiative. For example, infrequently needed data located on high-performance Tier 1 Fibre Channel drives can be moved (or migrated) to a nearline SATA disk array. Later, that data can be migrated to a fixed-content archive system, virtual tape library (VTL) or tape library. In other cases, data migration is used to move data from an old storage system to a new platform. However, there is more to data migration than simply issuing a "Move" or "Copy" command from the operating system. In actual practice, migrating hundreds of gigabytes, or even terabytes, requires a highly efficient tool with the intelligence to recognize data types and follow policies that aid in automation.
There are many tools to choose from, and the actual choice depends on a careful evaluation of transparency, interoperability with storage systems and data, complexity, policy enforcement and other factors. Now that you've reviewed the essential issues involved in any tiered storage acquisition, this segment will first focus on specific considerations for data migration tools. After that, you'll also find a series of specifications to help make on-the-spot product comparisons between vendors, including Brocade Communications Systems Inc., CA, EMC Corp., Hewlett-Packard Co. (HP), Incipient Inc., Network Appliance Inc. (NetApp) and Symantec Corp.
Determine how the data migration tool behaves. Data migration tools can differ radically in their operation. For example, some tools will copy data from the source to the destination and then mirror writes to both locations until a final "cutover" occurs. This type of behavior may be preferable if the migration will take place in phases or is intended to support the addition of a new or upgraded storage system. Other tools will simply move the data to the target and then delete that data from the source. This accomplishes data migration without using additional storage, but makes it more difficult to "undo" the migration if problems arise. Lab testing can help to identify any idiosyncrasies before the purchase.
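The two behaviors described above can be contrasted in a minimal Python sketch. The function names and directory arguments are illustrative, not any vendor's API; real tools perform the "mirror writes until cutover" step continuously at the block or file level, which is only indicated here by a comment.

```python
import shutil
from pathlib import Path

def migrate_with_cutover(source: Path, target: Path) -> None:
    """Copy-then-mirror style: data is duplicated to the target, and writes
    would go to both copies until a final cutover retires the source.
    Easier to undo before cutover, but consumes capacity in both locations."""
    shutil.copytree(source, target)   # initial bulk copy to the target
    # ... application writes are mirrored to both trees until cutover ...
    shutil.rmtree(source)             # cutover complete: retire the source

def migrate_move_and_delete(source: Path, target: Path) -> None:
    """Move style: data lands on the target and the source copy is removed
    immediately, so no extra capacity is used -- but the migration is hard
    to undo if problems appear afterward."""
    shutil.move(str(source), str(target))
```

Lab testing either style against representative data, as suggested above, will surface idiosyncrasies (capacity consumption, undo behavior) before purchase.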
Consider how migration impacts storage performance. Data migration will invariably affect the performance of your storage subsystems or storage fabric. This, in turn, may have an unwanted influence on storage service levels or application availability. When considering a migration tool, make the effort to measure storage performance with and without the migration process running. Performance degradation is typically worst when moving a substantial amount of data (e.g., bringing a new or upgraded storage system online). Data that is migrated on an ongoing basis, such as daily or weekly moves, typically has a smaller overall impact.
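The "measure with and without migration running" comparison can be as simple as timing sequential reads against a representative file on the affected storage. This is a rough sketch, not a substitute for a proper benchmark suite; the path and block size are assumptions.

```python
import time

def measure_read_throughput(path: str, block_size: int = 1 << 20) -> float:
    """Read a file sequentially and return throughput in MB/s.
    Run once during normal operation and again while a migration job is
    active; the difference approximates the migration's overhead."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(block_size):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed
```

Comparing the two readings over several runs gives a defensible before/after number to weigh against service-level commitments.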
Consider where data migration will be handled. Data migration tools can be host-based, network-based or storage-based. Host-based data migration tools typically run on a dedicated server. Experts note that host-based data migration tools are typically storage agnostic and can migrate data in the background. This allows for a greater range of storage system choices in the future. Network-based data migration is generally a feature of intelligent switches, making migration a function of the storage fabric itself. This is often a more powerful approach, but potentially limits interoperability between switches and storage systems. Storage-based data migration occurs within the storage subsystem itself, such as Hitachi Data Systems' (HDS) TagmaStore. Storage-based data migration is generally the least disruptive but also the least interoperable approach.
Evaluate file- and block-level tools. Data migration tools can operate at the file and block levels. Experts suggest the use of block-level data migration tools, but file-level data migration tools should not be ignored, especially in situations where storage is overallocated to servers. For example, a Windows server may use 300 GB of a 500 GB volume, but a block-level data migration tool sees the 500 GB volume and can only move it to another volume of equal or larger size. This can perpetuate inefficient storage provisioning. By comparison, a file-level data migration tool can move the 300 GB of files to a volume of that size or larger.
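The sizing rule in the example above can be stated as a small function -- a hypothetical helper for capacity planning, not part of any tool's interface. A block-level tool must copy the whole volume image, while a file-level tool only needs room for the data actually stored.

```python
def minimum_target_gb(volume_gb: float, used_gb: float, file_level: bool) -> float:
    """Smallest target volume a migration tool can use.
    Block-level tools copy the entire source volume, so the target must be
    at least the volume's size; file-level tools only need the used capacity."""
    return used_gb if file_level else volume_gb
```

Plugging in the article's figures: a block-level move of a 500 GB volume holding 300 GB of files still demands a 500 GB target, while a file-level move needs only 300 GB.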
Weigh the automation features. Data migration can often be performed manually, but it should not remain a manual process, especially if data must be moved on an ongoing basis. Automation is particularly important to keep labor/administration costs to a minimum. When considering a data migration tool, be sure to examine the automation features closely. For example, Brocade's Data Migration Manager (DMM) can automatically create migration pairs, implement automatic zoning and use a scheduler feature to perform tasks at preset times. CA's File System Manager includes a policy simulation feature that allows administrators to gauge the effectiveness of policies before actually moving data. Incipient's Network Storage Platform (NSP) provides automated provisioning.
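To make the policy idea concrete, here is a minimal sketch of an age-based migration policy with a dry-run mode analogous to the policy simulation described above. The tier paths, 90-day threshold and function name are assumptions for illustration only, not any vendor's defaults.

```python
import shutil
import time
from pathlib import Path

def apply_age_policy(tier1: Path, nearline: Path, age_days: int = 90,
                     dry_run: bool = True) -> list:
    """Migrate files untouched for more than `age_days` from a Tier 1
    path to a nearline tier. With dry_run=True, only report what would
    move -- mimicking a policy-simulation step before real migration."""
    cutoff = time.time() - age_days * 86400
    selected = []
    for f in list(tier1.rglob("*")):          # snapshot before moving anything
        if f.is_file() and f.stat().st_mtime < cutoff:
            if not dry_run:
                dest = nearline / f.relative_to(tier1)
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.move(str(f), str(dest))
            selected.append(f)
    return selected
```

Running such a policy from a scheduler (nightly, for instance) is the kind of automation that keeps ongoing tiered-storage moves from consuming administrator time.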
The data migration product specifications page in this chapter covers the following products:
- Brocade Communications Systems Inc.; Brocade Data Migration Manager (DMM)
- CA; File System Manager
- CA; Unicenter Desktop DNA
- CA; XOsoft WANSync & WANSyncHA
- EMC Corp.; Invista
- EMC Corp.; Open Migrator/LM
- EMC Corp.; Rainfinity
- EMC Corp.; SAN Copy
- EMC Corp.; SRDF (Symmetrix Remote Data Facility) Family
- Exeros; DataMapper
- GoldenGate Software Inc.; Transactional Data Management (TDM)
- HP; ILM Tiered Storage software
- Incipient Inc.; Network Storage Platform (NSP)
- Network Appliance; V-Series appliance
- ScriptLogic Corp.; SecureCopy 4.11
- Symantec Corp.; Veritas Storage Foundation software
- Symantec Corp.; Veritas Volume Replicator