AI In Code Series: Sinch - Welcome to the world of commodity.ai

Most of us use Artificial Intelligence (AI) almost every day, often without even realising it. A large proportion of the apps and online services we all connect with have some degree of Machine Learning (ML) and AI built in, providing predictive intelligence, autonomous internal controls and smart data analytics designed to make the User Interface (UI) experience more fluid and intuitive.

That’s great. We’re glad the users are happy and getting some AI-goodness. But what about the developers?

What has AI ever done for the programming toolsets and coding environments that developers use every day? How can we expect developers to develop AI-enriched applications if they don’t have the AI advantage at hand at the command line, inside their Integrated Development Environments (IDEs) and across the Software Development Kits (SDKs) that they use on a daily basis?

What can AI do for code logic, function direction, query structure and even for basic read/write functions… and what tools are in development? In this age of components, microservices and API connectivity, how should AI work inside coding tools to direct programmers towards more efficient streams of development, so that they don’t have to ‘reinvent the wheel’ every time?

This Computer Weekly Developer Network series features a set of guest authors who will examine this subject — this post comes from Pieter Buteneers in his role as director of engineering in ML & AI at Sinch.

Buteneers was previously CTO of Chatlayer, an AI/bots company in Antwerp that was acquired last year by Sinch, the global cloud communications leader for mobile customer engagement.

Welcome to the world of commodity.ai

Buteneers writes as follows…

The big cloud providers are doing the best they can to make it really easy for developers to build Machine Learning or Artificial Intelligence solutions.

You can use their APIs to convert speech to text, interpret images, detect anomalies and so on.

In return, you only pay for your consumption. All in all, it’s a very compelling proposition: the cloud providers have effectively managed to turn many Machine Learning applications and functions into commodity APIs.
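As a rough sketch of what that pay-per-consumption model looks like from the developer's side, the snippet below builds (but does not send) a request to a speech-to-text endpoint. The endpoint URL, payload shape and field names here are entirely hypothetical, not any particular provider's real API; each real provider documents its own.

```python
import base64
import json
import urllib.request


def build_transcribe_request(audio_bytes: bytes, api_key: str,
                             endpoint: str = "https://api.example-cloud.com/v1/speech:transcribe"):
    """Build (but do not send) a POST request for a hypothetical speech-to-text API."""
    payload = json.dumps({
        "config": {"language_code": "en-GB"},
        "audio": {"content": base64.b64encode(audio_bytes).decode("ascii")},
    }).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )


# One HTTP call per transcription job: the provider meters usage and bills
# per request or per second of audio, with no model of your own to train.
req = build_transcribe_request(b"\x00\x01fake-audio", api_key="YOUR_KEY")
```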

But what if you want to build a solution that the APIs don’t provide? Or what if you want to play with the bleeding edge of Machine Learning? Think, for example, of GPT-3 (released by OpenAI), which can write entire strings of human language text with compelling arguments that appear to come from real (human) content experts.

If you need a new AI function, then your gut feeling might tell you to start looking for a Machine Learning engineer or even a team of engineers and get them to start working on it. But did you know that there is no need to make your life so hard? Allow me to illustrate one simple trick that even the most experienced Machine Learning engineers use to go beyond the commodity and get into the bleeding edge zone in no time.

Go beyond the commodity

Building your own Machine Learning models requires lots of data and increasing amounts of computing power. Knowing that, you might think that only companies like Google and Facebook are able to run these experiments, and in a way you are right.

Google, Facebook and the other big players want to attract new talent working on AI, typically people with a PhD. In academia, these people were used to sharing all their results with the world in scientific papers, often including the code, and most of them rose to fame because of it.

Because sharing is in the DNA of these researchers, the tech giants have better luck attracting them if they are allowed to share their work. The research that gets shared is then picked up by other researchers around the world and improved upon.

The sharing mentality

Buteneers: breaking down AI at the developer (language) level.

Thanks to this open culture of sharing, everybody has access to the bleeding edge of Machine Learning. There are even companies like huggingface.co that have made it their mission to make closed-source Natural Language Processing (NLP) models available to everyone. If necessary, they rebuild a model from scratch and fund the compute power required to train it, so they can make it public.

As a developer, this means you have access to all these models, often in code that you can run in production with limited tweaking. The bleeding edge is literally at your fingertips.

If you want to take it a step further and you happen to have some data lying around, you can even tweak these models to a task that is specific to your problem. Many of the groundbreaking computer vision and Natural Language Processing models can be reused for various tasks without a lot of work. These deep learning models consist of multiple layers: the bottom layers extract the necessary information, or features, from the data, and the top layer uses those features to make a final decision.

If you want to make your own image classifier, you can download the model that won the most recent ImageNet competition, remove the top layer and replace it with your own. With NLP models like BERT, you might not even need to throw the top layer away: you can use its output as an input for your own model.
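The recipe above can be sketched in a few lines. To keep the snippet self-contained, the frozen bottom layers of the pretrained network are stood in for by a fixed random projection; in real use you would instead run your data through an ImageNet winner or BERT with its classification head removed, and everything else stays the same.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for the frozen bottom layers of a pretrained network.
# The weights W are fixed and never retrained; only the top layer learns.
W = rng.normal(size=(64, 32)) * 0.1


def pretrained_features(x):
    """'Bottom layers': extract features without updating any weights."""
    return np.tanh(x @ W)


# Your own small labelled dataset (random numbers as a placeholder).
X = rng.normal(size=(500, 64))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Run the data through the frozen layers once...
F = pretrained_features(X)

# ...then train only your own replacement top layer on the features.
F_train, F_test, y_train, y_test = train_test_split(F, y, random_state=0)
top_layer = LogisticRegression(max_iter=1000).fit(F_train, y_train)
print(top_layer.score(F_test, y_test))
```

The design point is that the expensive part (training the feature extractor) is never repeated; only the cheap top layer is fitted on your own data.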

(…and here’s the good news)

At this point you are probably wondering what model you need to put on top. It sounds like you will have to do some Machine Learning on your own… and the answer is yes. But the good news is that the most difficult part has already been done for you. You don’t need to use any of these fancy deep learning models that are notoriously difficult to train. You can use the simple models that were built in the ’90s, when compute power was a scarce resource.

Out of the box, Python estimators like Scikit-Learn’s linear_model.RidgeCV are extremely well documented and will do all the heavy lifting for you. You just need to find a good set of examples and run them through the deep learning network. Use the output of this network, together with your desired output, as input for RidgeCV. Out comes your own Machine Learning model, which often achieves state-of-the-art performance.
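A minimal sketch of that step, assuming the deep network’s output has already been computed into a feature matrix (random numbers stand in for it here, so the shapes and targets are illustrative only):

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(42)

# Pretend 'features' is the output of the pretrained deep network for
# 200 examples (e.g. sentence embeddings of size 16 from BERT).
features = rng.normal(size=(200, 16))

# Your desired output: a number you want to predict per example.
target = features @ rng.normal(size=16) + 0.1 * rng.normal(size=200)

# RidgeCV picks the regularisation strength via built-in cross-validation,
# so there are no knobs you are forced to tune by hand.
model = RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0]).fit(features, target)
print(model.alpha_)  # the regularisation strength it chose
```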

If you want to kick it up a notch and make it a tiny bit harder on yourself, you can use the xgboost.cv method of the XGBoost (Extreme Gradient Boosting) toolbox instead. Although you’ll have to set quite a few parameters yourself, this method has won so many Machine Learning competitions on sites like Kaggle that you can find tutorials in every language on how to use it.

Now that you know the secret Machine Learning magic trick that experts have been using, Pandora’s box is open.

You can now build your own bleeding edge Machine Learning solutions as a developer, like we are doing at Sinch.

If you are curious about how far you can push the envelope, be sure to check out the language-independent NLP of the Chatlayer.ai bot platform. It allows you to train your bot in English and have it understand 100+ languages out of the box, similar to how a human can understand multiple languages at the same time, without using any translation engine.
