
BCS: more diverse workforce key to UK AI ethics global leadership

BCS report says the UK can take an international lead in AI ethics if it cultivates a more diverse workforce, including people from non-STEM backgrounds

The BCS has published a report advocating that the UK should take a global lead on the application of ethics to artificial intelligence (AI).

It said that the UK “can lead the world in creating AI that cares about humanity – provided more people from non-tech backgrounds choose the field”.

It added that a more diverse range of people need to be in AI-related jobs if public trust in the technology is to grow, and that government and the IT industry need to join forces to make sure that happens.

The policy discussion document Priorities for the national AI strategy is intended to complement the UK government’s National AI Strategy, which is due to be released later this year, a BCS spokesperson said. The Department for Digital, Culture, Media and Sport (DCMS) and the government’s Office for AI are aware of the contents of the BCS document.

The report was written by Bill Mitchell, the BCS’s director of policy. It is partly based on the organisation’s canvassing of the views of “groups of professionals across the BCS through our boards”, said the spokesperson.

Mitchell said: “The UK should set the ‘gold standard’ for professional and ethical AI, as a critical part of our economic recovery.

“We all deserve to have understanding and confidence in AI, as it affects our lives over the coming years; to get there, the profession should be known as a go-to place for people from a diverse range of backgrounds, who reflect the needs of everyone they are engineering software for.

“That might be credit-scoring apps, cancer diagnoses based on training data, or software that decides if you get a job interview or not. It’s about developing a highly skilled, ethical and diverse workforce – and a political class – that understands AI well enough to deliver the right solutions for society.”


The document called for more AI education in schools and for opportunities for adults to re-skill if a digital divide is to be overcome. It argued that while there are high quality apprenticeships and qualifications available – such as AI data specialist and data analyst apprenticeships, as well as a new T-level in digital business services – “they don’t sufficiently support those in the workforce already who do not have a STEM [science, technology, engineering and maths] background and want to change to AI-related roles, and who may have significant family or caring responsibilities”.

The report continued: “Strong leadership from government is needed to ensure the various existing initiatives…have suitable AI-related qualifications and training added to them to provide coherent pathways out of the digital divide.”

The report described a “wide variation in the level of competence and ethical practice” of organisations using AI, and said the government should develop new professional standards in AI across the public and private sectors.

It cited a YouGov report that found that 62% of UK adults believe “someone who develops computer software that can significantly affect people’s lives should be qualified as a government-approved Chartered professional”.

In July 2021, the BCS, the Royal Statistical Society and the Alan Turing Institute announced they were among the bodies joining forces to set up a data science “alliance” to establish professional and ethical standards across the profession.

In the report, the BCS authors cited the Civil Service’s “development of a body of professional practice, including ethical guidelines, for developing and adopting automated information systems, including AI, within public services” as showing a path forward to “significantly build public trust”.

Examples of falling trust cited in the report include last summer’s exam crisis, sparked when an algorithm was used to estimate grades. A follow-up survey by the BCS found that 53% of UK adults had no faith in any organisation to use algorithms to make judgements about them.

The report also urged government and industry to work together to ensure AI helps reach the goal of net-zero emissions. It cited a study by the National Engineering Policy Centre, of which the BCS is a member, into engineering-based solutions for achieving net-zero carbon emissions in line with the COP26 priorities.

Among other things, that study highlighted a “need for trusted data-sharing frameworks between organisations from all sectors to maximise finding opportunities for decarbonisation”.

