
How algorithmic automation could manage workers ethically

Managing workers by algorithm and automated process has generated ethical problems aplenty. Can such means be pressed into the service of a more ethical mode of worker management? We find out


Management by humans can be dismal. “In the old world of cabbing, the drivers were often abused,” says James Farrar, director of non-profit organisation Worker Info Exchange (WIE). Drivers would pay the same fee to drive for a taxi company, but receive differing amounts of business.

“You’d have so-called ‘fed’ drivers [fed with work] and ‘starved’ drivers, with favoured drivers getting all the nice work,” he says, with some dispatchers who allocated work demanding bribes. As a result, many welcomed dispatchers being replaced by algorithms: Farrar recalls cheering this in a session for new Uber drivers.

But management by algorithm and automated process has introduced new problems. Last December, WIE, which supports workers in obtaining their data, published its report Managed by bots. It documents cases of platforms suspending self-employed workers after facial recognition software wrongly decided they were letting other people use their accounts, then refusing to allow a human review of the suspension.

Facial recognition software tends to be less accurate for people with darker skin and the WIE report, noting that 94% of private hire vehicle drivers registered with Transport for London are from ethnic minority backgrounds, says this “has proven disastrous for vulnerable workers already in precarious employment”.

Farrar says there are broader problems, such as platforms taking on too many drivers, which reduces waiting times but makes it very hard to make a living through such systems, as well as congesting the streets. “Because these companies have behaved this way, they have almost become an impediment to realising the vision they set out,” he says. “I’m a technology optimist. It can deliver great things for people, workers and businesses. But we have to hold people accountable for how they use it.”

Farrar says employers should be transparent about their technology use, particularly over work allocation and performance management; should not use security and fraud prevention as excuses to hide what they are doing; and should not use automation on its own for life-changing decisions.

Role of unions

The Trades Union Congress, an association of 48 unions, made similar calls in Dignity at work and the AI revolution, a manifesto published in March 2021. Employment rights policy officer Mary Towers says unions can play a new role in handling and analysing the data that employers hold on their members, such as on pay. “I think without that kind of collective assistance, it would be very difficult for an individual worker to take control of their own data without pooling it,” she says.

A set of data could be used for analysis and as the basis for action such as an equal pay claim. A union could formally act as its members’ representative under data protection law or it could ask members to collect data independently, such as through WeClock, an app designed by international union federation Uni that enables users to log how long they spend working and commuting.

The ways in which automation and artificial intelligence (AI) are used with workers’ data can also be included in negotiations between unions and employers. A 2020 update of the collective agreement between Royal Mail Group (RMG) and the Communication Workers Union (CWU) includes a section on technology that states that “technology will not be used to de-humanise the workplace or operational decision-making” and that “the use of technology is designed to support more informed discussions between RMG and CWU and not replace them in any shape or form”.

Towers says that employers wanting to use technology well in workplace management should aim for “a collaborative, social partnership approach”. She adds that staff are often not aware of what employers are doing, which can be addressed by publishing an easily accessible register of what technologies are in use and offering workers access to their own data automatically, rather than requiring a subject access request.

Transparency over automation and AI also makes sense from a legal viewpoint, according to Sally Mewies, partner and head of technology and digital at Leeds-based commercial law firm Walker Morris. “It’s not possible, often, for humans to understand how decisions are made,” she says. “That’s the big concern when you apply it to staffing and human resources.”

This can raise employment law issues, while the EU’s General Data Protection Regulation, enacted by the UK in 2018, bans individuals being subjected to entirely automated decisions unless certain conditions are met. The UK government suggested abolishing this in a September 2021 consultation, which also proposed allowing the use of personal data to monitor and detect bias in AI systems. These measures have yet to be formally proposed in a bill.

Mewies says bias in automated systems generates significant risks for employers that use them to select people for jobs or promotion, because it may contravene anti-discrimination law. For projects involving systemic or potentially harmful processing of personal data, organisations have to carry out a privacy impact assessment, she says. “You have to satisfy yourself that where you were using algorithms and artificial intelligence in that way, there was going to be no adverse impact on individuals.”

But even when not required, undertaking a privacy impact assessment is a good idea, says Mewies, adding: “If there was any follow-up criticism of how a technology had been deployed, you would have some evidence that you had taken steps to ensure transparency and fairness.”

There are other ways that employers can lessen the likelihood of bias in automated workforce processes. Antony Heljula, innovation director at Chesterfield-based data science consultancy Peak Indicators, says data models can exclude sensitive attributes such as race, but this is far from foolproof, as Amazon showed a few years ago when it built an AI CV-rating system trained on a decade of applications, only to find that it discriminated against women.
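The reason excluding sensitive attributes is not foolproof comes down to proxies: other fields can stand in for the attribute that was removed. The sketch below, in Python with hypothetical file and column names, shows the kind of check that exposes this; it illustrates the pitfall rather than any company's production code.

```python
# Illustrative sketch only: dropping sensitive attributes does not remove
# bias if other features act as proxies for them. The file and column
# names here are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

applications = pd.read_csv("applications.csv")  # hypothetical historical hiring data

SENSITIVE = ["gender", "ethnicity"]
features = pd.get_dummies(applications.drop(columns=SENSITIVE + ["hired"]))
model = LogisticRegression(max_iter=1000).fit(features, applications["hired"])

# The sensitive columns are gone, but proxies (a women-only college, a career
# gap) may still encode them, so compare scores across the held-back groups
# rather than assuming the model is now neutral.
applications["score"] = model.predict_proba(features)[:, 1]
print(applications.groupby("gender")["score"].mean())
```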

As this suggests, human as well as automated decisions can be biased, so it can make sense to build a second model that deliberately uses sensitive attributes to look for bias in those decisions, says Heljula: “Call it anomaly detection.”
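A minimal sketch of that idea, with assumed field names: a second model is deliberately given the protected attributes, purely to test whether past decisions can be predicted from them. If they can, the underlying decisions deserve scrutiny.

```python
# Hedged sketch of the "anomaly detection" idea: a second model is *given*
# the sensitive attributes, purely to test whether past decisions (human or
# automated) can be predicted from them. Field names are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

decisions = pd.read_csv("promotion_decisions.csv")  # hypothetical decision log
protected = pd.get_dummies(decisions[["gender", "ethnicity", "age_band"]])

# An AUC well above 0.5 means outcomes correlate with protected attributes
# and the decisions that produced them warrant a closer look.
auc = cross_val_score(RandomForestClassifier(n_estimators=200),
                      protected, decisions["promoted"],
                      cv=5, scoring="roc_auc").mean()
print(f"Predictability of decisions from protected attributes (AUC): {auc:.2f}")
```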

Other options include: establishing an ethics committee to validate uses of AI; preferring relatively explicable AI models such as decision trees over others such as neural networks; and basing workforce planning on summarised data on groups of people rather than individuals. On the last, however, groups need to be sufficiently large – a prediction that all the women in a team are likely to leave becomes rather personal if only one woman works in that team.  
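On that last point, a simple suppression rule is one way to keep group-level reporting from becoming personal. The sketch below assumes a hypothetical per-person attrition-risk table and an illustrative minimum group size.

```python
# Minimal sketch of the group-size caveat: only report attrition-risk
# summaries for groups above a minimum size, so a figure cannot be tied
# back to one identifiable person. File, columns and threshold are assumptions.
import pandas as pd

MIN_GROUP_SIZE = 5  # assumed suppression threshold

workforce = pd.read_csv("attrition_scores.csv")  # hypothetical per-person scores
summary = (workforce.groupby(["team", "gender"])["attrition_risk"]
           .agg(headcount="count", mean_risk="mean")
           .reset_index())

# Suppress any row describing fewer people than the threshold.
print(summary[summary["headcount"] >= MIN_GROUP_SIZE])
```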

Heljula thinks concerns over bias and surveillance should force a rethink on how AI is used in human resources. “We have to shift away from ‘Big Brother’ monitoring to things that employees and contractors would welcome,” he says, such as using technology to check for bias in decisions or to assess employee skills in order to develop customised training plans.

AI can also be used for natural language-based services to answer workforce queries such as ‘what is the average salary in my team?’, he says. “It’s not monitoring what you’re doing, it’s helping you do your job more effectively.”

Infosys bids to combat bias in AI systems

India-headquartered IT consultancy Infosys has developed a five-step approach to tackling bias in AI. It looks for sensitive attributes in data; sets “fairness measures” such as a target for the percentage of women in a particular role; implements an AI-based system; makes its results explainable, such as saying which data was used to reject someone for a job; and builds in human governance of the outcomes. “It’s essentially a sanity check,” says David Semach, Infosys Consulting's head of AI and automation in Europe, of the human input. “It’s absolutely critical.”
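As an illustration of what one such fairness measure might look like in practice, the sketch below compares screening pass rates across groups against a target ratio and flags breaches for human review. The file, columns and threshold are assumptions, not Infosys's actual implementation.

```python
# Illustrative sketch of one possible "fairness measure": compare CV-screening
# pass rates across groups against a target ratio and route breaches to human
# review. The file, columns and threshold are assumptions.
import pandas as pd

TARGET_RATIO = 0.8  # assumed "four-fifths"-style threshold

screened = pd.read_csv("cv_screening_results.csv")  # hypothetical screening output
rates = screened.groupby("gender")["passed_screen"].mean()
ratio = rates.min() / rates.max()

if ratio < TARGET_RATIO:
    print(f"Fairness measure breached (ratio {ratio:.2f}): route batch to human review")
else:
    print(f"Fairness measure met (ratio {ratio:.2f})")
```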

Semach says Infosys is in the process of implementing such anti-bias functionality with a large consumer goods group that uses algorithms to screen tens of thousands of CVs. The company has set 30-40 fairness measures, which Semach says is about the right number, although he adds that “one of the biggest challenges is to define the measures” because the company didn’t generally have these in place already.

Israel-based data analytics software provider Nice has published a “robo-ethical framework” for its robotics process automation (RPA) users. This says robots must be designed for positive impact, to disregard group identities and to minimise the risk of individual harm. Their data sources should be verified, from known and trusted sources, and they must be designed with governance and control in mind, such as by limiting, monitoring and authenticating access and editing.

Oded Karev, Nice’s general manager for RPA, says it planned the framework primarily based on discussions with customers, as well as drawing on academic ethicists and partners. Workforce issues had a significant impact, with “a lot of cases of automation anxiety” from customers’ employees, as well as specific requests including a large US bank that wanted to ensure that software robots could not be exploited by rogue staff to commit fraud.


The company builds robots itself, but also sells use of its development platform, and although the framework is part of its terms and conditions, it does not enforce compliance. “It’s like when you sell a knife,” says Karev. “Someone can use it to cut salad and someone can use it to threaten someone.” The framework will evolve based on two-way communication with customers, he adds.

Many employers are already keen to demonstrate ethical use, however. Karev says the risk of fraud can be reduced by requiring the steps to put a robot into production to be carried out by different individuals, because this would require several people to conspire rather than a single fraudster. If robots are used to monitor employees, they can be set only to use data from business applications.
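A hedged sketch of that separation-of-duties control, with entirely hypothetical names: a robot release is blocked unless build, review and deployment were carried out by different people.

```python
# Hedged sketch of a separation-of-duties check: a software robot cannot go
# live unless build, review and deployment were done by different people.
# The data structure and names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RobotRelease:
    name: str
    built_by: str
    reviewed_by: str
    deployed_by: str

def can_go_live(release: RobotRelease) -> bool:
    # One person holding more than one role would make lone-actor fraud easier.
    return len({release.built_by, release.reviewed_by, release.deployed_by}) == 3

release = RobotRelease("payments-bot", built_by="alice", reviewed_by="bob", deployed_by="alice")
print(can_go_live(release))  # False: the same person built and deployed the robot
```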

For a global technology company that uses a robot for CV screening, “we added the guardrail that no rules can be created and applied automatically”, says Karev, and all changes are documented and reversible.
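A minimal sketch of such a guardrail, assuming a hypothetical rules store rather than Nice's own product: no rule change takes effect without an independent human approver, every change is logged, and the previous version can be restored.

```python
# Minimal sketch of the guardrail described above, with a hypothetical rules
# store: no screening rule takes effect without an independent human approver,
# every change is logged, and the previous version can be restored.
from datetime import datetime, timezone

class RuleStore:
    def __init__(self):
        self.rules = {}    # rule_id -> current definition
        self.history = []  # append-only change log

    def apply_change(self, rule_id, new_definition, proposed_by, approved_by):
        if not approved_by or approved_by == proposed_by:
            raise PermissionError("Rule changes need independent human approval")
        self.history.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "rule_id": rule_id,
            "previous": self.rules.get(rule_id),
            "new": new_definition,
            "proposed_by": proposed_by,
            "approved_by": approved_by,
        })
        self.rules[rule_id] = new_definition

    def revert_last(self):
        # Roll back the most recent change by restoring the prior definition.
        last = self.history.pop()
        self.rules[last["rule_id"]] = last["previous"]
```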

Karev says ethical automation helps to get business from the public sector, which is Nice’s largest vertical market in the UK. In November, it announced that a large UK government organisation was using AI and RPA technologies as part of a digital transformation strategy, including the processing of self-service applications to change payment arrangements and providing real-time guidance to human advisers.

“With that comes high regulation, a highly unionised environment and high demand for very strict ethical behaviour,” he adds.
