
AI productivity gains could result in four-day weeks for millions

Autonomy think tank publishes paper on potential for artificial intelligence-driven large language models to shorten people’s work weeks

Automating jobs with large language models (LLMs) could lead to significant reductions in working time without a loss of pay or productivity, says think tank Autonomy, but realising the benefits of artificial intelligence (AI)-driven productivity gains in this way will require concerted political action.

In a paper published 20 November 2023, Autonomy forecast that AI-led productivity gains could enable 8.8 million UK workers to move to a four-day working week by 2033, while just under 28 million could have their working hours reduced by 10% over the same period, if LLMs are deployed in the right way.

Noting that this would represent 28% and 88% of the UK’s roughly 32 million-strong workforce respectively, Autonomy said that while the latter scenario does not necessarily mean a four-day working week for most, it would still mark a significant shift in the world of work.
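As a rough back-of-the-envelope check of those proportions (a minimal sketch using only the rounded figures quoted above, not Autonomy’s underlying modelling data):

```python
# Sanity check of the headline shares, using the approximate figures
# quoted in this article (not the paper's exact modelling inputs).
workforce = 32_000_000            # UK workforce, "roughly 32 million"
four_day_eligible = 8_800_000     # workers who could move to a four-day week by 2033
hours_reduced_10pct = 28_000_000  # workers whose hours could fall by 10%

print(f"Four-day week share: {four_day_eligible / workforce:.1%}")          # 27.5%, i.e. ~28%
print(f"10% hours-reduction share: {hours_reduced_10pct / workforce:.1%}")  # 87.5%, i.e. ~88%
```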

It added that there are significant opportunities for local authorities in particular, with the potential for 44 authorities across England and Wales to have at least one-third of their labour force eligible for a four-day working week by 2033. Of these local authorities, 18 are London-based.

“Our research offers a fresh perspective in debates around how AI can be utilised for good,” said Autonomy’s director of research, Will Stronge. “A shorter working week is the most tangible way of ensuring that AI delivers benefits to workers as well as companies. If AI is to be implemented fairly across the economy, it should usher in a new era of four-day working weeks for all.”

Autonomy noted that although people have long predicted far shorter working weeks due to technological advances, the productivity increases of recent decades have not translated into greater wealth or leisure time for most people, largely as a result of economic inequality.

It said there is often a sense of pessimism around AI-driven productivity gains, with most conversations emphasising the potential for job losses and degraded working conditions, but that such gains could also be used to deliver shorter working weeks for many while also maintaining their pay and performance.

“Such a policy offers the possibility of avoiding mass unemployment (and all the social and political effects of this), reducing widespread mental health illnesses as well as physical ailments associated with overwork and creating significant additional free time for democracy, leisure consumption and social cohesion in general,” it said.

“In the case of the UK – where work-related stress, anxiety and depression constitute one of the most significant labour market issues today – these wellbeing factors cannot be emphasised enough when it comes to the productivity question. Thus, we can expect a great deal of extra productivity-enhancing side effects of the shorter work week, outside of the AI-augmentations we have modelled.”

It added that a previous UK trial of the four-day working week – which ended with most participating firms deciding to keep the shorter week on a permanent basis – showed that many enterprises do not need to spend extra money, or lose productivity, when shifting staff to shorter hours, particularly if the work is desk-based.

“The gains of work process reorganisation and evaluation, greater staff health, improved staff loyalty, reduced sick days and greater retention accrued through better work-life balance give a significant boost to performance,” it said.

However, Autonomy is clear that productivity gains are not always shared evenly between employers and employees, and depend on “geographic, demographics, economic cycle, and other intrinsic job market factors” such as workers’ access to collective bargaining.

“This is a paper that identifies an opportunity and not a destiny. The actual diffusion and adoption of technology is always uneven, driven by a variety of factors: wage levels, government policy, levels of sector monopolisation, trade union density and so on,” it said.

“Needless to say, widespread adoption of these new AI technologies will require a robust industrial strategy that traverses national, federal and municipal levels and that deploys incentives and regulations for the private sector.

“Most importantly, workplace technologies are social and political technologies and therefore worker voice – those who will be working alongside and in collaboration with these tools – will be essential.”

To deliver positive AI-led changes for workers and not just employers, Autonomy recommends setting up “automation hubs”, underpinned by trade union and industry agreements, to boost the adoption of LLMs in ways that are both equitable and lead to a reduction in working time.

“These hubs would also aim to increase adoption in sectors that have seen low investment – through whichever financial incentives are made available,” it said. “These hubs could have branches for each employment sector, and each branch – perhaps at local authority level – would have specific expertise regarding the nature of the work in question and the AI technology that is most relevant.”

Like Autonomy, others have highlighted the tension between the benefits of AI and the question of who those benefits are distributed to.

In September 2023, for example, the House of Lords launched an inquiry into the risks and opportunities presented by LLMs, which in its first session heard from witnesses about the power asymmetries between governments and the developers of the technology.

In his written evidence to the inquiry, Dan McQuillan, a lecturer in creative and social computing, also highlighted the risks of outsourcing decision-making processes to such AI tools.

“The greatest risk posed by large language models is seeing them as a way to solve underlying structural problems in the economy and in key functions of the state, such as welfare, education and healthcare,” he wrote.

“The misrepresentation of these technologies means it’s tempting for businesses to believe they can recover short-term profitability by substituting workers with large language models, and for institutions to adopt them as a way to save public services from ongoing austerity and rising demand.

“The open question is how much of our existing systems will have been displaced by large language models by the time this becomes clear, and what the longer-term consequences of that will be.”

Outlining the “profound implications for employment”, McQuillan further added that the net effect of LLM deployments within the current balance of social forces “is the acceleration of precaritisation, outsourcing and privatisation”, which means the technology will be “used as an opportunity to transform social systems without democratic debate”.

In October 2023, the Ada Lovelace Institute published a review of UK public sector uses of AI foundation models (including LLMs), noting the technology could be used for tasks such as document analysis, decision-making support and customer service improvement.

The Ada Lovelace Institute further outlined claims that using AI in this way will lead to greater efficiency in public service delivery, more personalised and accessible government communications tailored to individual needs, and improvements in the government’s own internal knowledge management.

“However, these benefits are unproven and remain speculative,” said the institute, noting that while there is optimism in both the public sector and industry about the potential of these systems – particularly in the face of tightening budgetary constraints and growing user needs – there are also real risks around issues such as bias and discrimination, privacy breaches, misinformation, security, over-reliance on industry, workforce harms and unequal access.

It further added that there is a risk such models are adopted by the public sector because they are a new technology, rather than because they are the best solution to a problem.  
