Will 2026 be the year that Augmented Intelligence Trumps Artificial Ignorance?

Forty years ago I forecast that the impact of machine learning would be that book-learning and logical reasoning would lose status, just as literacy did when everyone could read and write. Two years ago I asked “Will 2024 be the year AI begins to transform Education, Recruitment, Training from cradle to dotage?”.

We can now see that happening: a sharp fall in graduate recruitment, a plateau in applications for full-time degrees, and school and university finances under mounting pressure, including to cover the cost of recent expansion. Both sectors have responded with consolidations, mergers and shared procurements to cut the cost of transitioning to customised on-line learning, upgrading the reliability, resilience and security of educational broadband and mobile connectivity, and making bulk deals for access to content.

One of the first fruits of the new spirit of collaboration between universities is the content deal recently renegotiated between JISC and the Big Five educational publishers. This “resets the UK’s open access agreements to achieve greater equity, inclusivity and sustainability”, without opening the academic publishing community to plunder by AI tools which do not also pay for access. The contract terms will, however, almost certainly lead to smaller universities having to come “under the umbrella” of those creating global (not just intra-UK) networks of local-access lifelong learning campuses and study centres (the JANET and JOHN strategy).

The schools market has polarised between academy chains (with in-house technical support), which are transitioning to a mix of Microsoft 365 and Teams, and local authority schools (without such support), which are transitioning to that which is free of charge, such as Google for Education. Meanwhile the London Grid for Learning had brought its network in-house and connected directly to LINX a year before the DfE upgraded its guidance on digital and technology standards.

In three weeks I plan to attend the 41st “British Educational Training and Technology” show (Bett UK, 21-23 Jan 2026) to catch up with who is trying to sell what to whom in an increasingly difficult market, where most widely used products and services already use “augmented intelligence” under human control to greatly improve school productivity, making much better use of teachers’ time to educate pupils with diverse needs, not just meet current ministerial targets.

In summary …

Whether planned or not, the current hype for AI means that 2026 is likely to see the biggest change to UK educational funding and status since 1536, when the chantry schools and Oxbridge colleges supported by religious orders were closed or “merged” because a bankrupt King needed money after the printing press had removed their monopoly of learning and literacy. Meanwhile the Pope and the Holy Roman Emperor tried, in vain, to use censorship to resist the tide of change as medieval scholasticism was replaced by secular academic disciplines across most of the universities of Europe.

Today we see similar challenges to accepted authority and groupthink. Current AI expectations and valuations may be as unrealistic as those of the dotcom bubble, but global access to digital technology and communications, from algorithmic programming to social media, was shrinking the world, and opening it to economies of scale, before Covid accelerated the process.

Now we are living with the consequences of the Covid lockdown – from the rush on-line to the realisation that the authorities were more successful in modelling and manipulating public behaviour than in reducing the impact of a virus that did little harm to the fit and healthy. We now face the backlash.

But it is not just humans who cannot tell the difference between clickbait and authoritative analysis. We are asked to believe (as in “The Bitter Lesson” by Richard Sutton) that using general-purpose AI products and services to collate vast amounts of “Big Data” from on-line sources which do not charge is preferable, at least in theory, to using self-auditing tools which build on the curated wisdom of the past, citing checkable sources and paying for access to legal or academic publication databases as necessary. We also see calls for universities to drop the requirement for students to provide checkable references to support their arguments, as opposed to teaching them how to check statements, including whether the references are genuine.

At the same time we are being asked to discuss the “ethics” of AI, instead of using the technology itself to help digest and apply 300 years of evolving copyright and patent law, 200 years of evolving product liability law and 150 years of law covering electronic communications and content. If AI is capable of what is claimed, then it is capable of making sense of the applicable law that the digital revolutionaries of California think should not apply to them. Their current approach is akin to that of the film studios who created Hollywood as far from the patent lawyers of the Edison corporation as they could go. The current state of legal protection vis-à-vis intellectual property rights may be seriously flawed, but open access to all the world’s data for a handful of currently dominant players is unlikely to be acceptable to those outside the introverted world of AI.

Meanwhile the debate over whether technology is a threat to humankind (freedom, jobs, morality etc.) or an augmentation (as the tool is to the hand) is not new. I am not sure how much has been added in recent years to the debate in the Colleges of Unreason (Cambridge) summarised in the “Book of the Machines” chapters of the Victorian satire Erewhon. That debate led to the destruction of all machines invented in the previous 271 years (a limit chosen because of the insidious dangers posed by a certain type of mangle much favoured by washerwomen) and to the reactionary civil wars which nearly ruined the country (the model for the Butlerian Jihad of Dune).

What has happened is that it has become increasingly difficult to find the wisdom of the past, including most of what was put on-line more than a few years ago, as well as what lies behind paywalls. It is, therefore, essential to educate and train those using AI in the principles of data science. In 1982 I said that one of the key skills of the future would be “the ability to think clearly and express oneself accurately and concisely to get sensible answers from the all-embracing information databases”. I was very pleased earlier this year to discover that this is at the heart of courses on the use of AI to improve the productivity of, for example, management consultants.