A survey of more than 500 C-level executives conducted by McKinsey in March 2017 reported that an increasing number of companies are using data and analytics to generate growth, but data monetisation, as a means of such growth, is still in its early days.
Nearly half of all respondents to the survey said data and analytics had significantly or fundamentally changed business practices in their sales and marketing functions, and more than one-third said the same about research and development (R&D).
According to McKinsey, executives in high-performing companies were more likely than others to already be monetising data and to report that they are doing so in more ways, including adding new services to existing offerings, developing entirely new business models, and partnering with other companies in related industries to create pools of shared data.
“The high performers’ focus on data monetisation may stem from a better ability, and a greater need, to adapt to change. Compared with their peers, high-performing respondents report that data and analytics activities are prompting more significant changes in their core business functions,” wrote Josh Gottlieb, a specialist in McKinsey’s Atlanta office, and Khaled Rifai, a McKinsey partner, in the Fueling growth through data monetization report, which was based on the survey.
“Data is the new currency for organisations,” says Cynthia Stoddard, Adobe CIO, who is responsible for IT and cloud operations. The software company has faced several challenges relating to data management, including the inability to make decisions quickly and difficulties in accessing data. “We needed a data-driven model to move from a reactive organisation to a proactive business,” she says.
For Stoddard, the main barrier the company faced was to ensure there was commonality in its data. “Different parts of the organisation had come up with different data definitions,” she says. This led to inconsistencies, preventing Adobe from being able to use its data to achieve actionable insights.
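The commonality problem Stoddard describes can be pictured as mapping each team's field names onto one canonical data dictionary. Adobe's internal tooling is not public, so the following is only a minimal sketch; all schemas, field names and mappings are invented for illustration:

```python
# Hypothetical sketch of reconciling divergent data definitions.
# Each team's schema uses its own name for the same concept.
TEAM_SCHEMAS = {
    "marketing": {"cust_name": "Jane Doe", "cust_mail": "jane@example.com"},
    "support":   {"customer":  "Jane Doe", "email_addr": "jane@example.com"},
}

# Canonical dictionary: team-specific field name -> agreed definition.
CANONICAL = {
    "cust_name": "customer_name", "customer": "customer_name",
    "cust_mail": "email", "email_addr": "email",
}

def normalise(record: dict) -> dict:
    """Rewrite a team's record using the canonical field names."""
    return {CANONICAL.get(field, field): value for field, value in record.items()}

# Both teams now describe the customer with the same keys.
normalised = {team: normalise(rec) for team, rec in TEAM_SCHEMAS.items()}
```

Once every record uses the same vocabulary, results from different parts of the organisation can be compared directly rather than reconciled by hand.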
To tackle her data governance issues, Stoddard says Adobe decided to concentrate on understanding personas, looking across the customer journey at how different people in Adobe interact with the customer. “The customer is not static, so we looked at the customer journey,” she says. “There will be different touchpoints in the organisation, which means the view of the customer changes over time.”
The company has developed a Hadoop cluster and analytics to provide the flexibility to adapt as the customer progresses. “We can look at the metrics relevant to our users,” says Stoddard.
Adobe began its data analytics journey in 2017, developing a customer framework using a Hadoop base to consolidate multiple databases into a single technology stack. The initial build took six months. “We keep building different metrics on top of this,” she says.
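Adobe's build sits on a Hadoop stack, whose setup is beyond a short example, but the consolidation pattern itself can be sketched with SQLite standing in for the source databases. The table and column names below are invented:

```python
import sqlite3

def consolidate(sources, target):
    """Copy each source connection's customer rows into one target store,
    tagging every row with the system it came from."""
    target.execute("CREATE TABLE IF NOT EXISTS customers (source TEXT, id INTEGER, name TEXT)")
    for name, conn in sources.items():
        for row in conn.execute("SELECT id, name FROM customers"):
            target.execute("INSERT INTO customers VALUES (?, ?, ?)", (name, *row))
    target.commit()

# Usage: two toy "source systems" and one consolidated store.
a = sqlite3.connect(":memory:")
a.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
a.execute("INSERT INTO customers VALUES (1, 'Jane Doe')")
b = sqlite3.connect(":memory:")
b.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
b.execute("INSERT INTO customers VALUES (7, 'John Roe')")
store = sqlite3.connect(":memory:")
consolidate({"crm": a, "billing": b}, store)
```

Keeping a source tag on every row preserves lineage, so metrics built on top of the consolidated store can still be traced back to the system of record.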
Adobe adopted an agile approach to delivering new functionality every month. It established a data governance group, which steers the development of the project to make sure it remains in line with business needs, and provides a feedback loop, through impact assessment, to ensure the analytics remains useful to the business. “We look at the impact across different business domains and adjust what we do if the business impact is not high enough,” says Stoddard.
To date, Stoddard says, Adobe has been able to reduce the time needed to produce new metrics for users from weeks to seconds.
Making sense of data
As Computer Weekly has previously reported, utility company Centrica set its sights on developing and delivering technology-based products to residential and business customers. It created a subsidiary, Io-Tahoe, to develop tools to solve the data management challenge of linking multiple diverse legacy systems to provide analytics on its 14 million customer accounts. Last year, it launched a £100m innovation fund and made its first investment in the shape of data discovery firm Rokitt Astra. Io-Tahoe is now commercialising its tool.
Explaining the company’s challenge, Io-Tahoe CEO Oksana Sokolovsky says: “Most businesses are data-driven. Data can help drive revenue. It is an asset, and companies are looking to monetise this.”
Financial services, banking and telecommunications companies, among others, handle vast amounts of data. Sokolovsky, who previously headed up transformation services in the global technology and operations group at Deutsche Bank, believes businesses struggle to keep control of their data and to understand what is really happening across the business. Organic and inorganic growth, she says, generates still more data. “Large enterprises acquire a lot of different databases and are now building data lakes. There are copies of copies of data.”
This is a fundamental technology challenge, says Sokolovsky. “For instance, there are different versions of the same customer,” she says. “Even within the same bank, different divisions will each hold customer information in multiple databases.”
Data can also be incomplete or wrong. Within a single organisation there may be a great deal of redundant data, a database may merge three columns into one, or a field may be left completely blank, she adds.
The taxonomy in these databases may also differ, so one division may identify its customers via a customer ID field while another uses the account number. “It is a challenge to determine that both are talking about the same customer,” she says. And the data itself may be interpreted differently. “Washington may be a street, a city, a state or a person’s name,” she says. “Businesses need to determine where they store the data in order to understand fundamental business relationships.”

This is exactly what Io-Tahoe’s tools aim to do, by giving businesses a way to see how data moves through the enterprise. Io-Tahoe uses machine-learning algorithms to work through each data element in a dataset and determine its specific relationships.
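Io-Tahoe's machine-learning algorithms are proprietary and not described in detail here, but the underlying idea, inferring that two differently named columns identify the same entity, can be illustrated with a toy value-overlap heuristic. All identifiers and values below are invented:

```python
def value_overlap(col_a, col_b):
    """Jaccard similarity of two columns' value sets: a crude proxy for
    'these columns describe the same thing'."""
    a, b = set(col_a), set(col_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# One division keys customers by customer ID, another by account number,
# but the values largely coincide.
division_1 = {"customer_id": ["C001", "C002", "C003", "C004"]}
division_2 = {"account_number": ["C002", "C003", "C004", "C005"]}

score = value_overlap(division_1["customer_id"], division_2["account_number"])
# A high score suggests both columns identify the same customers,
# despite the different field names.
```

A production tool would combine many such signals (data types, value formats, statistical profiles) rather than relying on a single overlap score, but the principle of inferring relationships from the data itself is the same.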
The goal for many organisations is to have one version of data that they can trust, such as a single customer record that remains consistent across every touchpoint in the business.
Businesses often struggle to maintain data. As Io-Tahoe’s Sokolovsky points out, different parts of the business can often use different terminology for the same customer record, provide incomplete data or even introduce errors.
To tackle this, Adobe established a data governance group to ensure that applications adhere to the company’s data standard, that the correct information is captured and that the data is maintained consistently. People also need to adhere to the data standards when filling in forms, yet people on the front line may not see the value in spending a lot of time entering information correctly. The challenge for data management is to convince the organisation that everyone in the business can benefit from consistent, clean data.
Case Study: How Admiral made its Teradata data warehouse more agile
Admiral Group began using Teradata a few years ago because it was finding it difficult to update its old mainframe system to add new insurance products.
Explaining the company’s data warehouse plans, James Gardiner, data warehouse technical lead at Admiral, says: “We wanted to use the US GuideWire software to spin up new products. But all data was coming from the mainframe accessed via SAS or Excel.”
However, Gardiner admits that the Teradata project was slow and painful. “The project was run tactically to ensure the new system went in on time,” he says. “We ran a waterfall methodology and hand-coded Teradata. We had a complicated extract, transform and load (ETL) process to move data from our source systems into Teradata and the documentation was out of date.”
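A hand-coded ETL pipeline of the kind Gardiner describes reduces, in miniature, to three stages. This hypothetical Python sketch shows only the shape; the field names and cleaning rules are invented:

```python
def extract(source_rows):
    """Pull raw rows out of a source system (here just a list of dicts)."""
    return list(source_rows)

def transform(rows):
    """Normalise names and drop incomplete records."""
    return [
        {"policy_id": r["id"], "holder": r["name"].strip().title()}
        for r in rows if r.get("name")
    ]

def load(rows, warehouse):
    """Append the cleaned rows to the warehouse table (here a list)."""
    warehouse.extend(rows)
    return warehouse

# Usage: one messy row survives cleaning, one empty row is dropped.
warehouse = []
raw = [{"id": 1, "name": "  jane doe "}, {"id": 2, "name": ""}]
load(transform(extract(raw)), warehouse)
```

The pain Gardiner describes comes from multiplying this pattern across many source systems and keeping every hand-written variant documented and in sync, which is what Admiral later sought to automate.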
Following the implementation, the company wanted to reimplement the data warehouse project. “We found the traditional method of implementation was not aligned to the business,” says Gardiner.
Because Admiral’s database team were not Teradata experts, the company needed Teradata consultants to hand-write custom code for each business request. Admiral wanted a more agile approach to updating its Teradata data warehouse so it could turn around business requirements quickly, says Gardiner.
Admiral began looking at how it could automate the code generation for the Teradata data warehouse, he adds. “We did a proof of concept with WhereScape, which allowed us to become agile, so we could change our methodology.”
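The article does not describe WhereScape's internals, but metadata-driven code generation of this kind can be sketched as rendering DDL from a declarative column specification, so that adding a warehouse table means editing metadata rather than hand-writing SQL. The table and column names here are invented:

```python
def generate_ddl(table, columns):
    """Render a CREATE TABLE statement from a {column_name: type} spec.
    A toy stand-in for metadata-driven warehouse code generation."""
    cols = ",\n  ".join(f"{name} {dtype}" for name, dtype in columns.items())
    return f"CREATE TABLE {table} (\n  {cols}\n);"

# Usage: describe a hypothetical fact table once, generate its DDL.
ddl = generate_ddl("quote_fact", {
    "quote_id": "INTEGER",
    "premium": "DECIMAL(10,2)",
})
```

A real tool generates far more than DDL (load routines, scheduling, documentation), but the design choice is the same: the metadata is the single source of truth, and the code is a reproducible by-product of it.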
By using WhereScape, according to Gardiner, the data warehouse team can now work with the business on rapidly prototyping new ideas, which can then be further developed into products. “We can speed up development by six to eight times and be more flexible with the business,” he says.
“Essentially, we can load data a lot quicker. Admiral will try a lot of different products and spin up trials quickly to see if they bring in new customers.”
To support this, Gardiner says WhereScape allows Admiral to build warehouse components very quickly. Previously, this would have taken months.
The team supporting Teradata is also a lot smaller. “The new WhereScape project has 20 people on the team,” he says. “The previous tactical project had 60 to 80 people.”
Given that Admiral’s data team is mainly trained in SQL Server, Gardiner says: “We can take someone in SQL Server and get them building on Teradata, without lots of hand-holding.”