AI Workflows – Appian: Up the abstraction ladder (not down the rabbit hole)
This is a guest post for the Computer Weekly Developer Network written by Medhat Galal, in his role as SVP of engineering at Appian, where he leads process automation transformation.
Galal writes as follows…
It’s become fashionable to point out that AI agents are failing to meet expectations.
But from a software development perspective, the reasons why this is the case are clear. AI agents need higher-level tooling and higher-level abstraction in the orchestration layer to be able to do higher-level work.
At present, we’re stuck with AI agents that are automation-oriented.
Their behaviours are defined in a question-and-answer format. In a typical cycle, the user has a question, the agent searches all available knowledge bases for the answer, and returns it to the user.
But the ultimate goal of an AI agent isn’t to return information its user wants, it’s to solve the problem the business has. We’re not going to get there by optimising single user-to-agent interactions. Rather, we should be thinking in terms of hundreds or thousands of agents, working with each other and with humans, to address complex, interdependent problems.
To move AI up this abstraction ladder, the same shift needs to happen to the inputs it receives.
Up the abstraction ladder
We’ve seen this pattern before: early software ran on bare metal using assembly and other low-level languages. Over time, higher-level languages like C, Java, and Python emerged, each adding layers of abstraction and enabling more complex applications.
Some would argue that AI is another stage in this ongoing process. While current integrations excel at basic information retrieval, solving real problems demands more. Agents need input not just from data retrieval, but also from the component business processes and tasks related to the problem, so they can return a higher-value analysis.
Ultimately, getting more value out of AI means getting higher-level, more human-like outputs that go beyond answering the question, and closer to solving the user’s underlying problem.
From rules to reasoning
AI agents today can be classified into four levels of increasing sophistication and autonomy.
1. ‘Level 0’ agents are prescriptive, with pre-defined sets of rules that significantly limit the outcomes they can achieve. Most of today’s AI agents sit at this level or the next.
2. ‘Level 1’ agents are AI-assisted, able to help with discrete tasks like document classification and information retrieval. They’re automation-oriented agents, with basic access to databases, integrations, email systems and the like – whatever is needed to perform the discrete tasks they’re asked to do.
For example, an insurance claims processing agent will have access to at least the above information sources to perform the function of assessing inbound claims against pre-set criteria and route its provisional decision to a human for validation.
For many organisations, these level 0 and level 1 agents may deliver big productivity gains that are worth the investment. But there’s more potential here. These tools are narrow in what they do and aren’t helpful for the higher-level reasoning tasks that most businesses today are ultimately hoping AI agents will achieve.
We need better tooling, otherwise we’ll be stuck playing ‘fetch me the email’ with agents.
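A level 1 agent like the claims example above can be sketched as a simple rule-based function. This is a minimal illustration with hypothetical field names and an invented approval threshold, not Appian’s implementation: the agent assesses a claim against pre-set criteria and routes a provisional decision to a human for validation.

```python
from dataclasses import dataclass

# Hypothetical claim record; the fields and threshold below are
# illustrative assumptions, not a real insurer's schema.
@dataclass
class Claim:
    claim_id: str
    amount: float
    policy_active: bool
    documents_complete: bool

def assess_claim(claim: Claim) -> dict:
    """Assess a claim against pre-set criteria and return a
    provisional decision for a human to validate."""
    if not claim.policy_active:
        decision = "reject"
    elif not claim.documents_complete:
        decision = "request-documents"
    elif claim.amount <= 1_000:  # pre-set auto-approval threshold (assumed)
        decision = "approve"
    else:
        decision = "manual-review"
    # The agent never finalises anything: every outcome is routed
    # to a person, which is what makes it level 1 rather than level 3.
    return {
        "claim": claim.claim_id,
        "provisional": decision,
        "route_to": "human-validator",
    }
```

The hard limits are visible in the code itself: every outcome the agent can reach is enumerated in advance.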
From discrete tasks to solving problems
3. ‘Level 2’ agents are AI-automated. In this scenario, the organisation assembles discrete agents in combination to achieve goals above and beyond the basic discrete tasks described above. For example, rather than simply reading a document or fetching an email, specialised agents performing different functions are orchestrated into a process that classifies inbound emails or documents, extracts the relevant information, routes it and writes it to the relevant database.
This orchestration of discrete agents enables organisations to achieve more complex operational goals. Although some organisations have reached this stage, it remains surprisingly rare, given that this is the kind of setup that gets you closer to actually solving business problems.
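The classify–extract–route–write process described above can be sketched as a pipeline of discrete agents. Each “agent” here is a plain stand-in function (in practice these would be LLM-backed services); the function names and routing rules are illustrative assumptions.

```python
# Each "agent" is a stand-in callable; in a real deployment these
# would be separate, possibly LLM-backed, services.

def classify(doc: str) -> str:
    # Stand-in for a document-classification agent.
    return "invoice" if "invoice" in doc.lower() else "correspondence"

def extract(doc: str) -> dict:
    # Stand-in for an information-extraction agent.
    return {"text": doc, "length": len(doc)}

def route(kind: str) -> str:
    # Stand-in for a routing agent; queue names are invented.
    return {"invoice": "accounts-payable"}.get(kind, "general-inbox")

DATABASE: list[dict] = []  # stand-in for the relevant database

def write(record: dict) -> None:
    DATABASE.append(record)

def pipeline(doc: str) -> dict:
    """Orchestrate the discrete agents into one level 2 process:
    classify -> extract -> route -> write."""
    kind = classify(doc)
    fields = extract(doc)
    record = {"kind": kind, "queue": route(kind), **fields}
    write(record)
    return record
```

The point of the sketch is the orchestration layer: none of the individual agents gets smarter, but composing them turns four fetch-style tasks into one operational outcome.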
4. ‘Level 3’ agents are goal-oriented. Returning to the insurance claims example, what if the user’s ultimate goal is to assess the potential impact of an upcoming policy change on the company’s customer retention? In this case, what the user really needs is a high-level synthesis – the answer isn’t a straightforward ‘fetch’ job.
A goal-oriented agent has to reason with the information it has in the context of the wider goal: identifying the relevant pieces of information, assessing whether they have the potential to affect the outcome and escalating them to a human being.
… and it has to do all of this without pre-defined flows dictating how it uses tools to achieve its goals.
Higher-level tooling
The next big job for software developers is making this business logic available as a tool to the agent.
You might codify a whole series of complex enterprise tasks via the Model Context Protocol (MCP), so that they become higher-level tooling for an agent. The developer can then codify a defined set of steps – looking up information, making specific decisions, driving certain actions – orchestrated as part of a higher-level reasoning process that addresses a specific business goal.
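The shape of this idea can be sketched as a toy tool registry: a multi-step business process (look up information, apply a decision rule, drive an action) is exposed to the agent as one higher-level tool. This mimics how an MCP server exposes tools but is not the real MCP SDK; every name, field and rule below is a hypothetical example.

```python
# Toy tool registry illustrating business logic exposed as a single
# higher-level tool. Not the real MCP SDK; names are invented.

TOOLS: dict[str, object] = {}

def tool(name: str):
    """Register a function as a named tool an agent can invoke."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

def lookup_affected_customers(policy_change: str) -> list[dict]:
    # Stand-in for a database/integration call (step 1: look up information).
    return [{"id": 1, "tenure_years": 1}, {"id": 2, "tenure_years": 5}]

@tool("assess_retention_impact")
def assess_retention_impact(policy_change: str) -> dict:
    """One tool wrapping a defined series of steps that addresses a
    business goal, rather than a single fetch."""
    affected = lookup_affected_customers(policy_change)
    # Step 2: a specific (invented) decision rule.
    at_risk = [c for c in affected if c["tenure_years"] < 2]
    # Step 3: drive an action - flag for escalation to a human.
    return {
        "affected": len(affected),
        "at_risk": len(at_risk),
        "escalate": len(at_risk) > 0,
    }

# An agent invokes the whole codified process as one tool call:
result = TOOLS["assess_retention_impact"]("new-deductible-rules")
```

From the agent’s side, the entire business process is a single call; the developer, not the model, owns the steps inside it.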
There are a wide range of other considerations in addition to these technical ones when it comes to implementing goal-oriented AI agents in a business. These include strategic goals, change management within the organisation, and safety & compliance.
These all deserve serious consideration, especially because, from a technical perspective, goal-oriented agents are now achievable.