AI developer toolset series: Synopsys on 3 major factors that define the 'type' of your AI

The Computer Weekly Developer Network is in the engine room, covered in grease and looking for Artificial Intelligence (AI) tools for software application developers to use.

This post is part of a series which also runs as a main feature in Computer Weekly.

With so much AI power in development and so many new neural network brains to build for our applications, how should programmers ‘kit out’ their AI toolbox?

How much grease and gearing should they get their hands dirty with… and, which robot torque wrench should we start with?


The following text is written by Boris Cipot, senior engineer at Synopsys — the company is known for its Design Compiler product, a logic-synthesis tool. Synopsys also offers products used in the design of application-specific integrated circuits (ASICs).

When we talk about Artificial Intelligence (AI), most people are referring to applications that use complex code in the background to deliver appropriate responses to given queries. Implementing those speech recognition or image processing functionalities can be a complex task.

However, the complexity of AI is not defined by the ‘type’ of AI.

Rather, to assess what shape our AI takes and what elements go into bringing it into being, we need to take several different factors into account when starting the implementation work.

Factor #1: language

The first factor is the programming language we choose for AI programming, which libraries are available for use… and what functions they provide.
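To make this concrete, here is a minimal sketch of the kind of functionality AI libraries package up for you: a 1-nearest-neighbour classifier written in pure Python. The data points and labels are made up for illustration; in practice a library such as scikit-learn would provide an optimised version of this and many other algorithms.

```python
import math

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(train, query):
    """Return the label of the training point closest to the query."""
    nearest = min(train, key=lambda pair: euclidean(pair[0], query))
    return nearest[1]

# Toy training set: (feature vector, label) pairs, invented for this example.
training_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"),
    ((4.8, 5.2), "dog"),
]

print(predict(training_data, (1.1, 0.9)))  # nearest neighbours are "cat"
```

A library saves you from writing (and debugging) this by hand, which is why the available libraries weigh so heavily on the choice of language.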

Factor #2: mathematical functions for ‘decision logic’

The second factor is the hardware required to process speech, images and other sensory inputs, along with the mathematical functions that then give the decision logic enough data to choose the appropriate response.

Most of this processing runs on powerful servers rather than on smaller devices like mobile phones. But there are other cases – the automotive industry, for example – where data connectivity is not a given and the decision logic cannot depend on such a fragile factor. There, the powerful processing hardware must be in the car itself, able to process data from sensors (e.g. LIDAR) in real time.
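One common mathematical function feeding such decision logic is the softmax, which turns a model's raw scores into probabilities so the most likely response can be chosen. The sketch below uses invented scores and response labels (the automotive-flavoured actions are purely illustrative):

```python
import math

def softmax(scores):
    # Convert raw scores to probabilities; subtracting the max keeps
    # the exponentials numerically stable.
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def decide(scores, responses):
    """Pick the response with the highest probability."""
    probs = softmax(scores)
    return responses[probs.index(max(probs))]

# Hypothetical actions a driving system might weigh up.
responses = ["brake", "steer left", "continue"]
print(decide([2.0, 0.5, 1.0], responses))  # highest score wins: "brake"
```

In a real system the scores would come from a trained network processing sensor data, but the final step – mapping numbers to a decision – looks much like this.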

Factor #3: learning dataset

The third factor is the ‘learning dataset’ we provide the AI with and where we can then refine the responses based on different inputs. Those datasets need to be massive to capture all possible variants to which the AI can react. Imagine how many languages AI would need to understand to be of use to the entirety of the world’s population.

Now consider all the different accents for each language so that users are able to understand the sentence at hand and its context. This is just one example of how massive and complex these datasets can become.
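Some rough, entirely made-up arithmetic shows how quickly variant combinations inflate a learning dataset before vocabulary or context are even considered:

```python
# Illustrative numbers only: real speech datasets vary enormously.
languages = 100
accents_per_language = 10
samples_per_variant = 1000  # assumed minimum to learn one accent reliably

# Every language/accent combination needs its own samples.
total_samples = languages * accents_per_language * samples_per_variant
print(total_samples)  # 1,000,000 utterances for this toy scenario
```

Multiply in speakers, background noise and sentence variety and the dataset grows by further orders of magnitude.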

Open source + & –

There are also many open source libraries available for AI programming. This is promising, as it means that there are many groups and individuals working on the question of how to develop the ultimate AI. However, open source requires additional research for the programmer to understand who is behind the library, how it is maintained… and how it can be used from a legal perspective.

Meanwhile, on the proprietary side, there are companies that provide architectures for development and testing, providing hardware technology to do it all. One example is Nvidia CUDA.


Others, like Amazon or Google, provide you with access to their AI in the cloud, where they are already teaching their AI functionalities with huge datasets provided by their customers. Those services can lower the complexity of the implementation; on the other hand, you may not always have the option of using them.

For instance, services cannot be leveraged in all use cases within the automotive industry.
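The appeal of such services is that the client side stays thin: you assemble a request, send your data, and the provider's pre-trained models do the heavy lifting. The sketch below builds a request body for a hypothetical speech-to-text service; no real provider's API is reproduced here, and the field names and audio placeholder are invented:

```python
import json

def build_transcription_request(audio_b64, language):
    """Assemble a JSON body for a hypothetical speech-to-text service."""
    return json.dumps({
        "audio": {"content": audio_b64},   # base64-encoded audio (placeholder)
        "config": {"language_code": language},
    })

body = build_transcription_request("UklGRi4...", "en-GB")
print(json.loads(body)["config"]["language_code"])  # en-GB
```

The trade-off is the dependency itself: where connectivity cannot be guaranteed, as in the automotive cases above, this thin-client approach is not an option.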

Implementing AI with Lego Mindstorms or for smart companions (e.g. ubtrobot) can be made as simple or as complex as programmers choose to make it based on the set of limitations involved.

At the end of the day, we should also consider how we plan to use the implemented AI and the consequences of a wrong decision – as AI tech isn't yet (and will not soon be) issue-free. We must contemplate what happens to the data (i.e. personal data) users provide and whether security is being implemented as a top priority.

Cipot: the complexity of AI is not defined by the type of AI.

