This is a guest post for the Computer Weekly Developer Network written by Jason Knight in his capacity as co-founder and VP of Machine Learning at OctoML, a company known for its acceleration platform that helps software engineering teams deploy machine learning models on any hardware or cloud provider service.
Knight writes in full as follows…

Over the past 50 years, digital automation has been the primary driving force behind technological progress, and software engineering is one of the main mechanisms of that force. The power of software is the ability to compose abstraction layers on top of each other, each layer enabling greater automation than its predecessor. These layers, coupled with the Moore's-law growth of computing power, have been a major contributor to the accelerated pace of automation.
In particular, machine learning, in all its forms, promises not only a significant increase in the rate of automation but also a transformation in the nature and scope of automation.
Two streams for LLM
Large language models (LLMs) are a very recent and relevant example. LLMs are poised to take automation to new heights, serving in two streams:
First, as an accelerant.
Enhancing the pace at which knowledge workers can produce automations using existing software tooling and workflows. Due to the 'hallucination' effect of LLMs, developers cannot fully delegate tasks to ChatGPT-like agents, as the current error rate is too high. Instead, LLMs will augment knowledge workers, serving as a new type of Google or Stack Overflow: verifying their work, navigating API documentation and suggesting simple solutions for common bugs.
Second, as a new kind of automation ‘stack’.
Facilitating a more advanced level of automation with the maturation of approaches like ToolFormer, ChatGPT Plugins and LangChain. These advancements will allow for a powerful, high-level API (human language) to construct computational pipelines. Users can create software pipelines with minimal guidance, encompassing queries to external websites, information aggregation and API invocation to act on the user’s behalf.
A program ‘interpreter’
LLMs essentially act as a new program ‘interpreter’ for natural language commands with a powerful in-process datastore of a ‘blurry jpeg’ copy of the entire Internet to rely upon when interpreting user commands.
At first, the only 'execution' mode that LLM interpreters could perform was to generate more text for a human user to read. Then developers taught LLMs to also emit specially formatted 'calls', which an outer loop mechanism recognises and evaluates in order to ingest additional data, execute Python code (either written by humans or generated earlier in the 'execution run'), or call web APIs to book travel or schedule an appointment.
This enables the new world of LLM natural language programs to interface with the existing world of software APIs and datastores.
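The outer loop described above can be sketched in a few lines of plain Python. This is a minimal illustration, not any particular product's implementation: the JSON call format, the tool names and the `run_outer_loop` function are all hypothetical stand-ins for the wire formats that real systems (ChatGPT Plugins, LangChain agents and the like) each define for themselves.

```python
import json

# Hypothetical tool registry: the names and functions here are illustrative,
# not taken from any real framework.
TOOLS = {
    "add": lambda a, b: a + b,
    "search": lambda query: f"results for {query!r}",
}

def run_outer_loop(model_output: str):
    """Recognise a specially formatted 'call' in model output and evaluate it.

    The format is assumed to be a JSON object such as
    {"tool": "add", "args": [2, 3]}; anything that does not parse as a
    call is treated as plain text and shown to the user as-is.
    """
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        return model_output  # ordinary text: no tool call to evaluate
    tool = TOOLS[call["tool"]]
    return tool(*call["args"])

# Stubbed 'LLM responses' standing in for real model output:
print(run_outer_loop('{"tool": "add", "args": [2, 3]}'))       # 5
print(run_outer_loop("Paris is the capital of France."))        # plain text
```

In a real agent loop, the tool's return value would be fed back into the model's context so it can continue 'executing' the natural language program, which is exactly how LLM programs come to interface with existing software APIs and datastores.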
Is this déjà vu?
But haven’t we already heard this same story from the increasing numbers of low-code/no-code (LC/NC) platforms and products?
While similar at a high level, LLMs will take the underlying promise of LC/NC and deliver it to an unprecedented degree. Traditionally, LC/NC platforms target particular domains and hence have limited scope for accelerating developers. This is because the traditional software engineering layers that LC/NC platforms are built on carry an inherent tradeoff between flexibility and abstraction: the higher the abstraction, the lower the flexibility a user has, either within a domain or to apply across domains.
On the other hand, LLMs offer the same high level of abstraction/automation that LC/NC platforms do (arguably higher) AND are general purpose in nature, unlike their LC/NC cousins. This is due to the increasingly seamless integration of various APIs and services and the general-purpose nature of LLMs, which stems from their training across such broad swathes of internet data.
The way that developers use LLMs to accelerate the pace of automation is evolving rapidly, on a weekly if not daily basis. So it is still too early to accurately predict the exact path of future progress, but one thing is for sure: LLMs have kicked off an automation inflection point.