We use Artificial Intelligence (AI) almost every day, often without even realising it: a large proportion of the apps and online services we all connect with have some degree of Machine Learning (ML) and AI built in, providing predictive intelligence, autonomous internal controls and smart data analytics designed to make the User Interface (UI) a more fluid and intuitive experience.
That’s great. We’re glad the users are happy and getting some AI-goodness. But what about the developers?
What has AI ever done for the programming toolsets and coding environments that developers use every day? How can we expect developers to build AI-enriched applications if they don’t have the AI advantage at hand at the command line, inside their Integrated Development Environments (IDEs) and across the Software Development Kits (SDKs) they use on a daily basis?
What can AI do for code logic, function direction, query structure and even for basic read/write functions… and what tools are in development? In this age of components, microservices and API connectivity, how should AI work inside coding tools to direct programmers towards more efficient streams of development, so that they don’t have to ‘reinvent the wheel’ every time?
This Computer Weekly Developer Network series features a set of guest authors who will examine this subject — this post comes from Adam Lieberman in his position as head of Artificial Intelligence & Machine Learning at Finastra… and Dawn Li, in her role as data scientist, also at Finastra — the company is known for its open platform that accelerates collaboration and innovation in financial services.
This is part one of a two-part commentary with the accompanying sister story located here — Lieberman & Li write as follows…
There are two main spaces where we’re seeing AI enter the developer toolset: Integrated Development Environments (IDEs) and Software Development Kits (SDKs). In both, code assistance as a feature is increasingly common.
Kite, for example, has created an IDE plugin that has code completion as a feature. So, when developers are writing functions, a machine learning algorithm analyses other segments of code to predict the rest, finishing off the function.
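Kite’s actual engine is a trained machine learning model, but the underlying idea — predict the rest of a function from what has been seen before — can be illustrated with a deliberately simple toy. The sketch below (hypothetical corpus, not Kite’s algorithm) ranks candidate function bodies by how often they followed a given signature in a training corpus:

```python
from collections import Counter

def train_completions(corpus_lines):
    """Count how often each one-line body follows a given 'def ...():' prefix."""
    model = Counter()
    for line in corpus_lines:
        if "):" in line:
            prefix, completion = line.split("):", 1)
            model[(prefix + "):", completion.strip())] += 1
    return model

def suggest(model, prefix):
    """Return the most frequently seen completion for this prefix, if any."""
    candidates = [(count, body) for (p, body), count in model.items() if p == prefix]
    return max(candidates)[1] if candidates else None

# Hypothetical training corpus of previously written snippets.
corpus = [
    "def add(a, b): return a + b",
    "def add(a, b): return a + b",
    "def add(a, b): return a - b",
]
model = train_completions(corpus)
print(suggest(model, "def add(a, b):"))  # prints: return a + b
```

A real completion engine works on token sequences with a learned model rather than whole-line frequency counts, but the interface is the same: the developer types a prefix, the tool proposes the rest.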
While plain text editors are still used by many experienced developers, IDEs are pretty much standard within the software development community. There are hundreds to choose from, but perhaps the most useful are those with more advanced code assistance tools. From debugging tools to colour palette presentation to machine learning based code completion, IDEs are taking the development world by storm. Platforms like PyCharm have over 2,000 different plugins in their marketplace aiming to assist the software developer.
Perhaps the most compelling examples of code automation are those recently created using GPT-3, an impressive newly released transformer model. GPT-3 is a huge machine learning model, focused mostly on natural language processing, and the use cases developers have created since the release of an API for interacting with it are striking.
One application lets the user request an entire model using nothing but text.
For example, a user might write, in standard English text: “I want a deep learning model with five layers that takes in an input of a 256 x 256 image and outputs a probability across 3 different classes.” Lo and behold, the application will deliver the code for the model.
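What exactly such an application delivers depends on the prompt and the framework it targets. As a hedged, framework-agnostic sketch of the kind of skeleton that request describes — five weight layers, a flattened 256 x 256 input and a 3-class probability output — the forward pass might look like this in plain NumPy (layer widths here are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Five fully connected weight layers between six layer sizes:
# flattened 256x256 input down to 3 class scores.
layer_sizes = [256 * 256, 512, 128, 32, 16, 3]
weights = [rng.normal(0, 0.01, (m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def forward(image):
    """Run a 256x256 image through the network, returning 3 class probabilities."""
    x = image.reshape(-1)
    for w in weights[:-1]:
        x = np.maximum(x @ w, 0.0)   # ReLU hidden layers
    return softmax(x @ weights[-1])  # probabilities over 3 classes

probs = forward(rng.random((256, 256)))
print(probs.shape, round(float(probs.sum()), 6))  # (3,) 1.0
```

The weights here are random and untrained; the point is the shape of the code a text request generates, not a working classifier.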
Working across multiple databases often means occasionally running complex SQL queries. One GPT-3 application can understand text requests for SQL queries and deliver the code for the query. From here it is of course possible to automate the running of the query, saving even more time.
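To make that pipeline concrete: below, a hard-coded SQL string stands in for what a text-to-SQL application might return for the request “total sales per region, highest first” (the table and data are hypothetical), and the follow-on automation step is just executing the delivered query:

```python
import sqlite3

# Stand-in for the query a text-to-SQL application might deliver for:
# "total sales per region, highest first"
generated_sql = """
SELECT region, SUM(amount) AS total_sales
FROM sales
GROUP BY region
ORDER BY total_sales DESC
"""

# Hypothetical schema and data in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EMEA", 120.0), ("APAC", 80.0), ("EMEA", 30.0)],
)

# Automating the run: execute the delivered query directly.
rows = conn.execute(generated_sql).fetchall()
print(rows)  # [('EMEA', 150.0), ('APAC', 80.0)]
```

In practice you would validate a machine-generated query (and its permissions) before running it against a production database, but the time saving is clear.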
The model is not yet perfect, but as it matures, the potential use cases are practically limitless.
No flying blind, please
As with any discussion around code automation and AI, there are some concerns that low-level development tasks (and even jobs) will be eliminated, but this is simply not true. Just as pilots must possess the skillset to fly “analogue” aircraft, so too must developers possess the skills to write and fully understand their code. Error detection and the automation of low-level tasks are focused on reducing lead times, not responsibility.
Even as GPT-3 reaches peak performance, the applications created from it should very much be viewed as efficiency tools.
More automation across the software development pipeline ultimately allows us to focus more on creating the solutions of tomorrow. From a data science perspective, tools that help to build models more quickly are always welcome – and if we can continually build better models that mature by analysing historical data, we can expedite the code-writing process.
As a data scientist, I could rewrite every algorithm I use from scratch, but this is an extremely inefficient use of time and skills when I can easily use a model by accessing a Python library via an API. Does this mean I will eventually lose the skills I need to understand and build algorithms? Of course not. If developers want models to write them code, they must understand what they’re asking the model to do. To get the answer, we must first know the question to ask.
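That reuse-versus-rewrite trade-off can be made concrete with an everyday example (assumed toy data, nothing specific to any one library choice): fitting the same least-squares line “from scratch” via the normal equations, and via a single NumPy call that does the same job:

```python
import numpy as np

# Toy data: points on the line y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
X = np.column_stack([x, np.ones_like(x)])  # design matrix [x, 1]

# "From scratch": solve the normal equations X^T X b = X^T y directly.
b_scratch = np.linalg.solve(X.T @ X, X.T @ y)

# Via the library: one call, same answer, better numerical behaviour.
b_library, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.round(b_scratch, 6), np.round(b_library, 6))  # [2. 1.] [2. 1.]
```

Knowing how the normal equations work is what lets you trust — and debug — the one-line library call; the library call is simply the efficient way to get the job done.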