The dilemma over what to do with an inherited collection of backups is a major contributor to post-merger chaos. If a large firm takes its time and brings smaller firms into the fold with proper planning and a strategic standard, the process can be relatively pain-free, with few loose ends. This requires a significant amount of administrative legwork in advance, along with an established line of communication between the business and its supporting technical departments.
However, once grown to incredible size, most major firms are not interested in anything slow and steady. 'Growth' and 'market share' become buzzwords that drive businesses towards the cosmic rate of expansion that shareholders clamour for. Little time is allotted between eating up smaller firms, leaving IT departments with three major strategies to choose from when facing a rapid infrastructure expansion.
1. Dump historical data and start afresh (not recommended by lawyers or regulatory commissions)
2. Connect now and attempt future standardisation
3. Connect and leave until the sun explodes
Within a single small company, backup managers already face a number of unique backup clients and irregularities that keep them up at night. When that small company is merged with a hundred more, each using a different backup application, regulatory standard and infrastructure design, those irregularities pile up. The last strategy above is the easiest to execute, but in the long run it produces far more headaches.
Strategy three is employed by some very large, notable firms, and is driven by the potential costs of consolidation. This approach discourages company-wide technology standards, creating a business of cobbled-together islands. The short-term costs are merely what it takes to connect new infrastructure to the existing network; staff and infrastructure are generally left alone to administer their environments as if nothing had happened. The long-term costs, however, can be astronomical. Such firms are saddled with ageing technologies left in place for what can be decades, infrastructure topologies that vary wildly, and decentralised staff whose skill sets do not overlap. Furthermore, refreshing that old technology is done on an individual basis, which handicaps the firm's ability to secure optimal volume deals from vendors.
More and more companies are realising that leaving things be doesn't work. Hiring operations staff to manage Legato, NetBackup, TSM, Backup Exec, CommVault and the rest is gruelling, and refreshing technologies from 40 different vendors is arduous for two data centres, let alone dozens.
With that said, consolidation is not without its costly deterrents. Migrating backup data from one application base to another, hiring contractors, and committing the hours needed to develop a strategy are all rather expensive. Despite these costs, a standard plan to move forward with, a centralised team to execute it, and a small group of lucky vendors selling you kit is far less costly than the embarrassment of an auditor finding a working rotary phone in one of your server racks.
About the author: Brian Sakovitch is a senior consultant at GlassHouse Technologies (UK), a global provider of IT infrastructure services. Brian has followed a six-year path in backup technologies, ranging from hands-on installation and implementation to design and theory. Three of those years were spent with GlassHouse US, focusing on predominantly backup-related engagements for companies of all shapes and sizes.