
Should we be worried that half of UK organisations don’t have a policy for the safe use of AI?

As artificial intelligence becomes more mainstream, business leaders must be aware of the risks and make sure their firms do not build bias into AI algorithms

Whether it’s in our homes, offices, or even pockets, many of us are now taking advantage of artificial intelligence (AI) technologies on a daily, if not hourly, basis.

As the technology continues to advance at a rapid pace, AI is acting as the powerhouse of emerging technologies, opening up opportunities for businesses that would have seemed unimaginable just a decade ago.

According to Deloitte’s latest Digital Disruption Index, four in five of the UK’s most influential businesses and public sector organisations will have invested in AI by the end of next year. However, despite this rising adoption, fewer than half of the organisations surveyed have a policy for its safe and ethical development.

The safety and security of AI have fallen under significant scrutiny in recent months, especially as AI and automation are now reaching critical areas such as healthcare, autonomous vehicles, security and justice.

The very real risk that AI systems may reproduce and amplify unconscious bias is of particular concern. We have already seen AI tools used in recruitment unfairly discriminate against candidates from certain backgrounds, and models used to automate mortgage assessments exhibit bias against residents of certain neighbourhoods.

However, as the use of AI begins to mature, the industry is swiftly learning and adapting. For instance, as some UK police constabularies begin to use AI for custodial decisions, developers are working to safeguard against the mistakes of earlier systems in other countries which, when rolled out, quickly developed bias against certain physical profiles.

The saying goes that “a bad workman blames his tools” and, in the case of AI development, it rings true. To address this, particular emphasis should be placed on ensuring the data used to train AI is fit for purpose from both an ethical and an operational standpoint.

Old behaviours

Models trained on historical data sets risk hard-coding old behaviours, including biases relating to race, age and gender. Programmers must therefore remove this bias and retrain models as necessary; tools and technologies that help detect and remove such bias in machine learning models are already being developed and deployed.
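To illustrate the kind of check such tools perform (a minimal sketch in Python, not any particular vendor’s product; the screening outcomes and the four-fifths threshold below are invented for the example), a simple demographic-parity test compares selection rates across groups and flags a large gap for investigation:

```python
# Minimal sketch of a demographic-parity check on a model's decisions.
# The records and the 0.8 "four-fifths" threshold are illustrative only.

records = [
    # (group, model_decision) -- hypothetical recruitment screening outcomes
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def selection_rate(data, group):
    decisions = [d for g, d in data if g == group]
    return sum(decisions) / len(decisions)

rate_a = selection_rate(records, "group_a")
rate_b = selection_rate(records, "group_b")

# Compare the lower rate against the higher one; a large gap is a red flag
# that the training data or model may be encoding historical bias.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Selection rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:
    print("Warning: disparity exceeds the four-fifths rule of thumb -- investigate and retrain.")
```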

When rolling out AI, organisations must also ensure the methods are appropriate. For lower-risk applications, such as music streaming sites offering song suggestions, allowing machine-learning systems to continuously adapt to a user’s preferences without supervision may be the best approach.

For high-risk applications, however, such as clinical triage or judicial decisions, robust validation, control mechanisms and regulatory frameworks are vital. Once deployed, organisations must carefully monitor performance and fine-tune these systems, keeping any recommendations or decisions they make under constant scrutiny.
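As a rough sketch of what that post-deployment monitoring could look like (the baseline rate, window and tolerance here are illustrative assumptions, not figures from the article), a simple drift check compares the live decision rate against a validated baseline and escalates to human review when they diverge:

```python
# Minimal sketch of post-deployment monitoring: compare the live decision
# rate against a validated baseline and flag drift for human review.
# The baseline and tolerance values are illustrative assumptions.

BASELINE_APPROVAL_RATE = 0.35   # rate observed during validation (hypothetical)
TOLERANCE = 0.05                # acceptable absolute drift before escalation

def check_drift(recent_decisions):
    """recent_decisions: list of 0/1 outcomes from the live system."""
    live_rate = sum(recent_decisions) / len(recent_decisions)
    drift = abs(live_rate - BASELINE_APPROVAL_RATE)
    if drift > TOLERANCE:
        # In a high-risk setting this would trigger human review, not just retraining.
        print(f"ALERT: live approval rate {live_rate:.2f} drifted {drift:.2f} from baseline")
    else:
        print(f"OK: live approval rate {live_rate:.2f} within tolerance")

check_drift([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])
```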

Not every circumstance, however, will call for a specific policy to be constructed to govern the use of AI. In most cases, the organisation’s existing ethics policy and procedures should be sufficient (or can be made sufficient). In all cases, an organisation’s values should inform how and why AI algorithms make their decisions, just as those values inform any other business decision.

Who is accountable?

If something is unethical in the real world, it will be unethical in the AI world. Currently, there is a concerning lack of understanding among business leaders around AI technology and the impact it will have on their organisations.

With AI now taking a significant role in business strategies, it’s vital that leaders take the time to not only understand the technology but, more importantly, to comprehend the harm it has the potential to cause.

For developers, considering the unique risks that AI can introduce in each use case, along with the regulatory requirements that apply, will be key to safeguarding every application.

Many people call for AI to be held accountable but, in reality, it is just like any other technology: it has no conscience and cannot be sanctioned. We, as human decision-makers and programmers, are entirely responsible for monitoring its outputs and managing its risks.

Challenges of bias and performance cannot be resolved through technical and regulatory means alone; they can also, and arguably more effectively, be addressed by fixing the endemic lack of diversity in the industry, especially in terms of gender and ethnic background, as well as in mindset and experience.

Matthew Howard is a director of artificial intelligence at Deloitte, leading AI strategy, design and delivery projects.
