This is a guest post written for the Computer Weekly Developer Network by Shomron Jacob in his position as head of applied Machine Learning and platform at Iterate.ai.
Iterate.ai is known as a company that provides what it calls an ‘AI innovation ecosystem’ that enables enterprises to build production-ready applications.
Jacob’s full title for this piece is: As LLMs mature, developers must hone their strategies – now – to gain an upper hand.
He writes in full as follows…
Already, generative AI and Large Language Models (LLMs) can transform developers’ day-to-day practices and the efficiency with which they can deliver innovative applications and CX differentiation to market.
The question is how best to do it.
Code-generation LLMs have only recently crossed over from a promising future technology to keep an eye on into complete solutions ready for primetime enterprise developer adoption. The proverbial cat (one capable of automatically writing effective code) is out of the bag, and those LLMs will only become more accurate and more transformative for developer teams moving forward.
As that unfolds, differentiation in developer team capabilities and the results they deliver will become a pure function of LLM strategies. Teams backed by private LLMs utilising rich stores of unique data – and that best navigate around LLM impediments such as AI hallucination – will command decisive advantages.
Gen AI hype yields to reality
For all the excitement over generative AI in the past year, many developers and businesses found themselves on the sidelines, curious about how to actually harness practical value. But one opportunity is now clear: more recent advances in code-generation LLMs mean that developers can provide natural language prompts and the LLM will provide fully functional and usable written code.
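As a minimal sketch of that prompt-to-code workflow: code-generation models typically return their answer wrapped in Markdown-style fences, so tooling needs a small step to pull the usable code out of the reply. The `sample_reply` string below is a canned stand-in for a real model response (no actual LLM API is called here), and the helper name is illustrative.

```python
import re

def extract_code(llm_response: str) -> str:
    """Pull the first fenced code block out of an LLM reply.

    Code-generation models often wrap the answer in Markdown
    fences; downstream tooling usually wants just the code.
    Falls back to the whole reply if no fence is found.
    """
    match = re.search(r"```(?:\w+)?\n(.*?)```", llm_response, re.DOTALL)
    return match.group(1).strip() if match else llm_response.strip()

# A canned response standing in for a real model reply:
sample_reply = (
    "Here is a function that reverses a string:\n"
    "```python\n"
    "def reverse(s):\n"
    "    return s[::-1]\n"
    "```\n"
)
print(extract_code(sample_reply))
```

In practice the reply would come back from whichever code-generation endpoint a team has adopted; the extraction step stays the same.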
For development teams that embrace this technology, the immediate dividends include massive productivity gains.
Developers accustomed to traditional coding methods and tedious block-and-tackle work can shove those tasks onto a code-generation LLM for reliable and automated completion, while still keeping the most interesting, innovative and strategically valuable work for themselves. Developers will no doubt harness these LLMs to augment that innovation as well. Teams thus empowered will iteratively improve their organisation’s products and CX more rapidly – developing and shipping features with a pace and agility that competitors generating code by hand can’t hope to match.
The machine-learning nature of code-generation LLMs also means that the technology’s benefits will only snowball as it advances into the next year and beyond.
In short, code-generation LLMs are primed to augment low-code strategies.
Developers that already utilise low-code strategies to accelerate application development will find that code-generation LLMs slot right into those strategies. While low-code uses code abstraction to streamline the developer experience and reduce obstacles between developers and the products they envision, arriving low-code code-generation tools will fully obliterate those barriers. Low-code and code-generation LLMs will sift the cruft out of developers’ day-to-day activities, with big impacts on productivity and developer satisfaction. As a result, developer teams enabled by low-code code generation will build a clear competitive edge over rival teams that don’t adopt and adapt to utilising LLM support.
Let’s look at how building private code-generation LLMs will prove to be a winning strategy.
Code-generation LLMs will be available to all: public LLM APIs will make sure of that. However, developers at enterprises that choose the simple route of leveraging public LLMs will find it far more difficult to differentiate their applications and features. Public LLMs trained on data shared by every organisation utilising that LLM will inevitably produce results that feel, well, similar.
Some things should remain private and an organisation’s unique proprietary data is about the strongest example there is. Enterprises that make the major investments required to harness expensive computing resources and train their own private code-generation LLMs will create IP that their developers alone benefit from. Developers supported with private LLMs can therefore produce applications offering capabilities and experiences that stand apart from competitors, driven by data only their LLMs can access. That differentiation will only become more crucial as generative AI matures and code-generation LLMs become ubiquitous.
Using LLMs the *right* way
Developers who understand the strengths and weaknesses of LLMs and adopt mature utilisation strategies will thrive in comparison with those who do not.
For example, developer toolsets should still include traditional AI functionality, which is more efficient from a compute perspective and delivers faster query responses. Traditional AI is also the better strategic choice in sensitive and high-risk use cases where reliability is a greater priority than delivering a unique customer experience: financial transactions are one such case. LLM utilisation strategies must also keep major risks such as AI hallucination in check.
Developer teams that apply the right tools to the right tasks (and implement checks and balances to ensure optimal LLM results) will realise the greatest benefits from the technology – and they can unlock those benefits now.
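One such check can be sketched in a few lines: before any LLM-generated Python is accepted into a pipeline, parse it with the standard-library `ast` module so output that is not even syntactically valid is rejected up front. This is a cheap first gate ahead of heavier review (tests, linting, human sign-off), and the function name and gating policy here are illustrative assumptions, not a prescription from the article.

```python
import ast

def passes_basic_checks(generated_code: str) -> bool:
    """Reject LLM output that is not syntactically valid Python.

    Parsing to an AST catches outright broken (hallucinated)
    syntax before the code reaches tests or human reviewers.
    """
    try:
        ast.parse(generated_code)
    except SyntaxError:
        return False
    return True

print(passes_basic_checks("def add(a, b):\n    return a + b"))  # True
print(passes_basic_checks("def add(a, b) return a + b"))        # False
```

A syntax gate like this says nothing about whether the code is *correct*, which is why it belongs alongside, not instead of, the broader checks and balances described above.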