Get set for interfaces that know what you’re doing before you do
This is a guest blog post by John Bates, CEO of SER Group, a Europe-based enterprise content management vendor. User interfaces have come a long way from the days of green screens — but the way we work with computers is about to take a fascinating new turn, he says.
When I began in this field, commercial data processing often relied on batch processing, PL/I, fourth-generation programming languages, and character-based green screens delivered via virtual desktop infrastructure (VDI). We’ve made incredible strides since then, but I believe we’re on the verge of an even greater leap forward—one that will make even the most advanced GenAI interfaces feel as outdated to knowledge workers as those early systems do now.
This might sound like a bold statement, especially considering how seamlessly we can pose business questions to our systems today, almost as if we were speaking to a colleague. Yet, I firmly believe we are nearing a pivotal moment where our systems won’t just react to queries but will evolve into intelligent, proactive collaborators.
Beyond the passive human-IT interaction of today
What I’m really focused on is the shift from today’s static, backward-looking approach to managing our content, to a future where the computer handles 95% of the work. We’re now in an era where, thanks to GenAI-based interfaces, we can interact with software systems using prompt-based natural language. However, it’s clear to me that this is becoming complex! Prompt engineering is emerging as a new science—almost its own branch of programming. It requires hiring someone who is both a knowledge expert and computer science-trained—able to ask the right questions to get the desired result. This doesn’t feel like the right direction.
However, I’m seeing the early stages of a new world where the software takes the lead, offering insights, suggesting workarounds, and providing behind-the-scenes support to maximise your efficiency in daily business tasks.
Doing so would give us what I call not just a passive interface, as we have now even with the most expensive versions of GenAI interfaces, but an anticipatory one. Imagine working in a framework where the computer never waits for you to notice that an email with a request for a business action has come in; it has already spotted it, applied the business rules you’d taught it, and either filed it, actioned it, or raised the right query back. Similarly, imagine that instead of having to ask for a summary of the meeting, a pop-up asks you to click if you want a summary, a list of the actions, and details of all attendees.
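The spot-it, apply-the-rules, then file-action-or-query pattern above can be sketched in a few lines of code. This is a minimal illustration only, not any vendor’s actual implementation; the rules, field names, and action labels here are all hypothetical.

```python
# A minimal, hypothetical sketch of anticipatory email triage:
# the system spots an incoming request, applies rules the user has
# taught it, and decides to file it, action it, or raise a query.

def triage(email, rules):
    """Return the first matching action for an incoming email."""
    text = (email["subject"] + " " + email["body"]).lower()
    for keyword, action in rules:
        if keyword in text:
            return action
    return "raise-query"  # nothing matched: ask the user what to do

# Example "taught" rules (purely illustrative)
rules = [
    ("invoice", "file-to-accounts"),      # invoices get filed automatically
    ("meeting", "offer-summary"),         # meetings trigger a summary offer
    ("order", "action-sales-workflow"),   # orders kick off a workflow
]

email = {"subject": "New order received", "body": "Please confirm the order details."}
print(triage(email, rules))  # → action-sales-workflow
```

A production system would of course replace the keyword matching with learned classifiers, but the anticipatory shape is the same: the decision happens before the user is ever interrupted.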
And so, has it taken away your job? I’d argue quite the opposite. It’s actually saved you 45 minutes of menial typing, checking, and distractions, allowing you to focus on the real challenges of the day—like salvaging a critical account, or strategising a response to a competitor’s seemingly game-over move.
By the same token, this would mean never needing to open a contract but simply asking—by voice or text—what type of contract it is, who the parties are, when it starts and ends, and other salient details. That means the document, whether it’s a contract, invoice, RFP, or CV, would actively and intelligently provide not only the answers but also suggest the best response or action based on its contents and your needs.
What we’re really aiming for is to make our business content more informed, connected, and, if you will, self-aware. (I sometimes use the term ‘sentient’ to capture this idea.) After all, computers are designed to quickly gather useful information from multiple sources; if we could just push this capability a little further, why couldn’t they begin to initiate low-level, routine actions on our behalf? This is where so-called ‘Agentic AI’ meets next generation document management.
Given the incredible advancements in translation software, this capability could extend seamlessly to multinational, multilingual contexts. Imagine receiving a query from your Taiwan office, having it instantly processed in your language, and then sending it back—perfectly translated into the appropriate local language.
That sounds incredibly useful, and many organisations, including some of our customers, are already achieving something like this. But what I’m envisioning is an interface that goes far beyond simply executing tasks. I want one that waits patiently and proactively, ready not just to follow my instructions but to offer 20 brilliant suggestions on how to improve my work or save time—all without me having to ask.
Making the most of each and every interaction
Hence, the idea of an anticipatory interface. But let’s rethink what we mean by ‘interfaces’ altogether—it’s time to move beyond screens and keyboards. Imagine a smart earpiece or glasses (privacy laws permitting) that discreetly informs you at an industry event: the person approaching is Federico, a customer about to place a €10M order. His partner is called Christian, and he has a red setter called Simon.
Once you start imagining IT that anticipates what’s useful for us, its value becomes clear. While we’re not there yet, advances in AI, metadata integration, and technologies like augmented and mixed reality could make 2025 the year we look back on and say, ‘Remember when we had to tell computers what to do?’ It’ll feel as quaint as reminiscing about VDUs and tape storage.