The emerging field of master data management addresses how to handle data that needs to be shared between different computer systems.
It has become important because, five years after the big rationalisation of IT systems in the late 1990s, most large companies still have dozens, and in some cases hundreds, of different IT systems.
The management of what is becoming known as master data - the core definitions of elements such as products, channels or customers, which all these systems gather information about - is a giant headache.
According to a survey by research firm Tower Group, most companies maintain master data separately in 11 or more source systems. In reality, the picture is even more complex, since large companies rarely have just one instance of an enterprise resource planning or customer relationship management system. It is not unusual for global companies to have 50 or more separate ERP instances, each with a slightly different implementation from the next.
How then can managers, tasked with tracking company performance by taking information from multiple IT systems, get the answers they need when there are so many versions of what they are measuring? How do the codes for "customer" and "product" get managed across this panoply of systems? The short answer is: with difficulty. Most organisations are slaves rather than masters of their corporate data.
Over time, local subsidiaries of global companies create many local variations of products to suit local markets. However, standardisation to one universal set of master data is impractical, as the costs of modifying the codes, associated information, packaging and brand data embedded in dozens of ERP and other operational systems would be huge.
For companies wishing to execute master data management projects, there are two major stages. The first is understanding the problem: mapping all the information against a set of criteria to see whether differences are really needed, identifying overlaps, and then producing a new set of products. The second is using these insights to make operational changes to the business. However, it is important to understand that this is not just a matter of mapping product codes.
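That first stage - mapping records from many source systems against matching criteria to separate genuine local variations from accidental duplicates - can be illustrated with a minimal sketch. The system names, product records and matching criteria below are hypothetical, invented for illustration; a real project would match on far richer attributes such as packaging, brand and channel data.

```python
# Hypothetical sketch: three source systems each hold their own product
# master data, with their own codes for what may be the same product.
from collections import defaultdict

source_systems = {
    "erp_emea": [{"code": "P-100", "name": "Widget", "pack_size": 12}],
    "erp_apac": [{"code": "WID-12", "name": "widget", "pack_size": 12}],
    "crm_na":   [{"code": "8841", "name": "Widget", "pack_size": 24}],
}

def match_key(record):
    # Criteria for "the same product": here, just normalised name and
    # pack size. Real criteria would cover packaging, brand and channel.
    return (record["name"].strip().lower(), record["pack_size"])

overlaps = defaultdict(list)
for system, records in source_systems.items():
    for record in records:
        overlaps[match_key(record)].append((system, record["code"]))

# Keys with more than one entry are candidate duplicates that could share
# one master definition; singletons may be genuine local variations.
for key, codes in overlaps.items():
    status = "overlap" if len(codes) > 1 else "local variation"
    print(key, status, codes)
```

Even this toy example shows why the exercise is more than code mapping: the decision of which attributes count as "the same product" is a business judgment, not a technical lookup.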
This may seem simple enough but, after a decade of vast global projects (for example, one large company spent £83m rolling out SAP globally), virtually every large company has a wide range of established applications. Each of these applications has its own master data definition, even though the software comes from fewer suppliers.
Large organisations frequently have hundreds of slightly different implementations of the same software deployed throughout their global operations.
What is needed are applications that do not try to fit the world around a simplified version of reality, but are based on the assumption that business models are complex and can probably never be standardised due to the different needs of different markets and customers. Instead, technologies are required that manage the complexity of real-world business models and expect there to be 11 or 57 different definitions of core master data.
Data warehouses are a useful step forward provided they can handle multiple, concurrent, linked business definitions. However, they are not sufficient to solve the overall problem, since an application is then needed to deal with the workflow to support the business "policy hub" - an application that can deal with version control, authorisation and security as well as analysis of the company's master data.
Master data management systems are needed that allow customers to understand and analyse their multiple business definitions across the various source systems, propose changes to this master data, and then publish new versions of product catalogues and customer segmentation. Understanding this business process, and implementing the elaborate workflow required to support it, still seems to elude most IT departments.
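The propose-authorise-publish cycle described above can be sketched in a few lines. This is a minimal, hypothetical illustration - the class name, the single "data_steward" role and the version scheme are all invented for the example, and stand in for the far richer security and workflow rules a real policy hub would enforce.

```python
# Hypothetical sketch of a "policy hub" for master data: changes are
# proposed, must be authorised, and are published as numbered versions.
from dataclasses import dataclass, field

@dataclass
class MasterDataHub:
    versions: list = field(default_factory=list)  # published versions
    pending: dict = field(default_factory=dict)   # proposal id -> definition

    def propose(self, proposal_id, definition):
        # Anyone may propose a change; nothing is published yet.
        self.pending[proposal_id] = definition

    def publish(self, proposal_id, approver):
        # Placeholder authorisation check: only a data steward may publish.
        if approver != "data_steward":
            raise PermissionError("not authorised to publish master data")
        definition = self.pending.pop(proposal_id)
        version = len(self.versions) + 1
        self.versions.append({"version": version, "definition": definition})
        return version

hub = MasterDataHub()
hub.propose("p1", {"product": "Widget", "segment": "retail"})
print(hub.publish("p1", approver="data_steward"))  # → 1
```

Keeping every published version, rather than overwriting one "current" record, is what lets analysts reconcile reports drawn from systems that picked up the master data at different times.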
Andy Hayler is founder and chief strategist at software supplier Kalido