Big data refers to a mass of information held digitally that is so large that it is difficult to analyse, search and process.
Insurers already hold vast amounts of data, but now they can gather even more from new sources such as GPS-enabled devices, social media postings and CCTV footage.
The key to unlocking this is through data analytics, the process of examining large amounts of data of a variety of types to uncover hidden patterns, unknown correlations and other useful information.
Such information can provide competitive advantage and result in business benefits, such as more effective marketing and increased revenue.
Premiums can be better correlated to risks, something particularly pertinent now given the impending arrival of Solvency II. If risk-based capital can be calculated more accurately, this influences the minimum amount of capital that needs to be held.
To find out how switched on insurers are when it comes to analysing data on a substantial scale, Ordnance Survey and the Chartered Insurance Institute worked jointly on a report and survey: The Big Data Rush: How Data Analytics Can Yield Underwriting Gold.
Some 242 underwriters were questioned about their data challenges and 220 members of the Chartered Institute of Loss Adjusters were asked about the data they collect and how underwriters could better exploit this.
Overall, the results showed the insurance industry is already well aware of the transformative power of big data – and also of the challenges it faces in getting the most out of it.
Big data rush findings
- 82% of respondents believe those insurers that do not capture the potential of big data will become uncompetitive.
- 96% of respondents say the digitally enabled world will see the emergence of new risk rating factors.
- Motor (88%), household (76%) and health (60%) are the insurance lines where pricing accuracy could be transformed by big data pricing models.
- Nine out of 10 respondents said access to real-time claims data would help price risk more accurately.
- 68% said real-time location-based data could revolutionise understanding of cumulative risk exposure in motor.
- 86% agree that the key to making best predictive use of big data is to be able to analyse data from all sources together rather than separately.
- 88% agree that linking information by location is key to usefully combining disparate sources of big data.
Accessing the right data
By aggregating the right data from existing and new sources, underwriters can build a much more detailed picture of risk, both on an individual and trend-led basis.
The key words are the ‘right data’. While insurers already hold a great deal, they are also likely to use other data that is publicly available, such as the electoral roll, or that they can purchase, for example by accessing credit checks.
Insurers also need bespoke data, but may find problems in extracting this directly from customers. It could be that customers are reluctant to provide some information, and there may then be problems using any supplied data with existing online models.
So, can the insurance companies make it worth the customer’s while to provide that information – something retailers have perfected with loyalty cards? Interestingly, insurers could hold a trump card here, given the rise of telematics and demand for affordable premiums.
Tapping into telematics
Far from being reluctant to share location-based driving data, drivers may be increasingly open to this as it makes cover more affordable.
Telematics offers the potential to earn lower premiums by monitoring driving behaviour, and will help insurers better understand risk exposure by providing details not only of the miles travelled, but also of the roads used (their classification), journey times and driving behaviours. By adding road layouts, an accident can be recreated and accident blackspot models built.
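As a rough illustration of how recorded journeys might feed into pricing, the sketch below blends mileage, road classification and night driving into a single risk score that scales a base premium. The weights, road classes and function names are illustrative assumptions, not any insurer's actual rating model.

```python
from dataclasses import dataclass

@dataclass
class Trip:
    miles: float
    road_class: str   # e.g. "motorway", "urban", "rural"
    night: bool       # journey made late at night

# Illustrative relative-risk weights only; real insurers calibrate
# these actuarially from claims experience.
ROAD_RISK = {"motorway": 0.8, "urban": 1.0, "rural": 1.2}

def telematics_score(trips):
    """Average per-mile risk score across a set of recorded trips."""
    total_miles = sum(t.miles for t in trips)
    weighted = sum(
        t.miles * ROAD_RISK[t.road_class] * (1.3 if t.night else 1.0)
        for t in trips
    )
    return weighted / total_miles

def adjusted_premium(base_premium, trips):
    """Scale a base premium by the driver's telematics risk score."""
    return round(base_premium * telematics_score(trips), 2)

# Mostly motorway miles in daytime earn a discount on a 500 base premium.
trips = [Trip(120, "motorway", False), Trip(30, "urban", True)]
print(adjusted_premium(500, trips))  # → 450.0
```

A safe motorway-heavy profile scores below 1.0 and so pays less than the base premium; risky patterns would scale it up.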
New ways of working
Location intelligence products are a major advance in managing big data.
These are built from layers of data, including addresses, functional sites and building footprints, and supplemented with the users' own data. Almost all events and transactions yield some location data, be it an address or place name (implicit location data) or GPS coordinates or elevations (explicit location data).
Merging the two to create a holistic view can be powerful, lending structure to shapeless raw data and delivering new insights. Some insurers are already using this approach to power up their underwriting function and to boost their modelling capabilities.
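To illustrate how implicit location data (an address) and explicit location data (GPS coordinates) might be merged, the sketch below geocodes policy addresses through a hypothetical in-house gazetteer and links a GPS-tagged event, such as a telematics alert, to the nearest insured address. The gazetteer, coordinates and distance threshold are all invented for illustration.

```python
import math

# Hypothetical gazetteer mapping policy addresses to (lat, lon).
GAZETTEER = {
    "1 High Street, Southampton": (50.9097, -1.4044),
    "10 Queen Square, Bristol": (51.4500, -2.5967),
}

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def link_event_to_policy(event_coords, addresses, max_km=0.5):
    """Attach a GPS-tagged event (explicit location) to the nearest
    policy address (implicit location) within max_km, else None."""
    best = min(addresses,
               key=lambda addr: haversine_km(event_coords, GAZETTEER[addr]))
    return best if haversine_km(event_coords, GAZETTEER[best]) <= max_km else None

# An event ~50m from the Southampton address links to that policy.
print(link_event_to_policy((50.9100, -1.4050), list(GAZETTEER)))
```

Joining disparate feeds on a shared spatial key like this is the "linking information by location" that survey respondents highlighted.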
Insurers which are serious about working with big data will also look to boost their expertise in areas such as:
- Data clustering – automated grouping of similar data points can provide new insights into apparently familiar situations.
- Sentiment analysis – textual keyword analysis can help gauge the mood of Twitter chatter on a given topic or brand.
- Web crawling – sophisticated programs that can identify an individual's “web footprint” from postings on social media websites, blogs and photo-sharing services. Using data matching, this can be linked to public records and data from other third parties to build a multi-dimensional profile of an individual.
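As a minimal sketch of the data clustering idea above, the following one-dimensional k-means groups claim amounts into clusters, separating routine claims from large losses. The claim figures are made up, and a real deployment would use a mature library and many more features.

```python
def kmeans_1d(values, k=2, iters=20):
    """Minimal 1-D k-means: group numeric values into k clusters
    and return the sorted cluster centres."""
    s = sorted(values)
    # Seed centres at evenly spaced quantiles so the sketch is deterministic.
    centres = [s[i * (len(s) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each value to its nearest centre.
            nearest = min(range(k), key=lambda i: abs(v - centres[i]))
            clusters[nearest].append(v)
        # Recompute each centre as the mean of its assigned values.
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

# Invented claim amounts: small routine claims and large losses
# fall out as two clusters around ~133 and ~5033.
claims = [120, 150, 130, 5000, 5200, 4900]
print(kmeans_1d(claims))
```

Even this toy version shows the appeal: the grouping emerges from the data itself rather than from a threshold an analyst chose in advance.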
In-house talent exists
Insurers are likely to need to work together with experts on big data. Despite this, it is worth remembering that the insurance industry’s profitability is rooted in its ability to analyse data effectively to accurately underwrite and price risk.
And insurers often already employ gifted people in areas such as actuarial work and marketing, in addition to underwriting. There is talk of a new role – the so-called data scientist.
Barriers to success
- 95% of respondents agreed that many underwriting departments lack the necessary tools.
- 81% agreed that many underwriting departments lack the specialist skills.
Data scientists will typically have an academic background in areas such as computer science, modelling and statistics, but beyond this will have business acumen and the ability to spot trends in data from multiple sources.
However, there is debate on whether insurers actually need this new role to succeed, as chances are the skills already exist within insurers – it is just a matter of getting the organisation’s creative thinkers to work closer with its data analysts.
More potential to work with loss adjusters
Loss adjusters’ ability to feed underwriters data from the site of the claim could be valuable for insurers looking to make underwriting and pricing decisions in real time.
- 52% of respondents use data from loss adjusters to support underwriting decisions.
- 84% of loss adjusters agree insurers currently underutilise the information gathered by loss adjusters.
- 87% believe that breakthroughs in predictive analytics mean that insurers will demand more detailed and frequently updated data.
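One standard actuarial way that accumulating real-time claims data can sharpen pricing is credibility weighting, which blends an insured's own experience with the portfolio average, leaning more on the insured's data as it grows. The sketch below uses invented numbers, and the credibility constant is an assumption, not a calibrated value.

```python
def credibility_premium(claims, book_rate, k=50):
    """Blend an insured's own claims experience with the portfolio
    book rate. k is an illustrative credibility constant: the larger
    it is, the more data is needed before own experience dominates."""
    n = len(claims)
    z = n / (n + k)                        # credibility weight in [0, 1)
    observed = sum(claims) / n if n else 0.0
    return z * observed + (1 - z) * book_rate

# With no claims data the estimate is just the book rate; after 50
# observed claims averaging 400, it moves halfway towards 400.
print(credibility_premium([], book_rate=600.0))           # → 600.0
print(credibility_premium([400.0] * 50, book_rate=600.0))  # → 500.0
```

The more frequently loss adjusters feed claims data back, the faster the estimate converges on the insured's true risk.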
Challenges of big data
Although the potential benefits are enormous, maximising the potential of big data will require investment – and in the current economy, insurers are likely to be constrained, as are so many other sectors.
This can stymie efforts to gain an enterprise-wide view of the business, which can be so valuable. And few need reminding that many insurers that have been through acquisitions have still not fully achieved this, even years later.
A further barrier is that, as more insurers handle business online, they have simplified their processes, which is likely to reduce risk visibility and make regulatory compliance harder.
Big data has arrived and it presents a previously unavailable opportunity for insurers to find new insights and to improve their business processes. The data rush is on and the first movers to turn big data into gold will gain the most competitive advantage.