We need insurance against data bias, particularly (and obviously) in the world of Machine Learning (ML), as it feeds the Artificial Intelligence (AI) systems built on top of it.
Errors relating to ML data bias occur when some elements of a dataset are more heavily weighted, and so more strongly represented, than others. This drives skewed outcomes, typically showing up as low accuracy in the AI model being developed.
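To make that concrete, here is a minimal, hypothetical sketch (not from Synthesized) of how over-representation skews results: a dataset dominated by one group, and a naive model that predicts the overall majority outcome, looks accurate overall while failing badly on the under-represented group.

```python
from collections import Counter

# Hypothetical loan-approval records as (group, outcome) pairs.
# Group "A" dominates the dataset; group "B" is under-represented
# and has the opposite outcome pattern.
records = ([("A", "approve")] * 80 + [("A", "deny")] * 10 +
           [("B", "deny")] * 8 + [("B", "approve")] * 2)

# A naive "model" that always predicts the overall majority outcome.
majority_outcome = Counter(o for _, o in records).most_common(1)[0][0]

def accuracy(rows):
    """Fraction of rows the majority-outcome predictor gets right."""
    return sum(o == majority_outcome for _, o in rows) / len(rows)

overall = accuracy(records)
group_b = accuracy([r for r in records if r[0] == "B"])
print(f"overall accuracy: {overall:.2f}")  # looks healthy, driven by group A
print(f"group B accuracy: {group_b:.2f}")  # poor: group B's pattern was drowned out
```

The headline accuracy hides the failure, which is exactly why bias has to be measured per group rather than in aggregate.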
Aiming to take up this fight is Synthesized, a DataOps platform that offers insurance businesses a way to discover unhelpful bias within their data, bias which, if mitigated, could make quotes, claims and premiums fairer.
The UK-based AI startup has unveiled FairLens, data-centric open source software for identifying and measuring data bias.
Denis Borovikov, co-founder and chief technology officer at Synthesized, says that many data science models rely on biased and skewed datasets.
“What we have created, with FairLens, is a mathematical framework to discover and visualise data bias. We hope FairLens will enable data practitioners to gain a deeper understanding of their data, and to help ensure fair and ethical use of data in analysis and data science tasks,” said Borovikov.
While data bias is still a taboo subject for many companies and industries, what FairLens seeks to enable is the behind-the-scenes discovery of data bias, which can then be mitigated.
Many insurance apps, for instance in automobile, health or life insurance, make decisions without human involvement, based on a company’s data. With limited, poor-quality or skewed datasets, data-driven applications often fail to achieve their intended purpose because they are inherently biased.
Synthesized thinks that understanding the hidden biases in data will help calibrate data science models to ensure fairer outcomes and access to previously underserved and underrepresented customers.
It could also dramatically reduce the risk of non-compliance with regulations and help protect brand reputation.
With FairLens, data scientists can:
- Measure bias
- Identify sensitive attributes
- Visualise bias
- Score fairness