How Imec hopes to industrialise artificial intelligence

Pure research explores the fundamental questions of artificial intelligence, but applied research takes those ideas to industry, making it all worthwhile

The Interuniversity Microelectronics Centre (Imec) is one of the world’s top 10 research and technology organisations (RTOs), performing both pure and applied research – and, most importantly, spotting promising research that is ready for industrialisation, and preparing that technology for use in industry.  

To do all this, Imec brings together industrial players to cooperate on technological innovations that will have an impact over the next 10 years. The partners share talent and intellectual property as well as cost and risk. The Belgian RTO also works with university partners, both in Belgium and abroad – and some of the researchers at Imec are also affiliated with universities.  

A case in point is Steven Latré, who leads artificial intelligence (AI) research at Imec and is also a part-time professor at the University of Antwerp. His primary focus is combining sensor technologies and chip design with AI algorithms to provide solutions in the sectors that need AI most, such as healthcare.

According to Latré, AI is nearing a critical phase where the algorithms demand more computing power than the hardware can provide. Some of the work Imec is doing might help to alleviate that bottleneck. The research organisation has its eye on those challenges and is also focused on three areas it thinks will have the most impact on industry in the near future – cloud AI, edge AI, and AI in healthcare.  

Cloud AI  

Machine learning algorithms are becoming very greedy in terms of computing resources. Popular systems, such as DALL-E and Generative Pre-trained Transformer 3 (GPT-3), require a great deal of computing power. DALL-E generates digital images from natural language descriptions, while GPT-3 uses deep learning to produce human-like text.

“The amount of intelligence is very high in these systems, and they produce beautiful results,” says Latré. “The downside of that is that the amount of computation that they need is really blowing up right now. If you look at the evolution of the demand for computing power and then compare that with the evolution of computer performance, you see that there will be a problem very soon.

“Up until about 10 years ago, these kinds of AI models were increasing their demand for computing power by a factor of about two every two years. This aligned very nicely with the evolution of CPUs, which also doubled in power about every two years.

“However, over the last 10 years, we’ve seen the demand for processing power increase by a factor of 10 every year. This means that whatever we build today, by two years from now the computing speed will have doubled, but the amount of computation that will be required by AI will have grown by a factor of 100.”
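A quick back-of-the-envelope calculation makes the gap concrete. The sketch below simply applies the two growth rates Latré cites; the figures are his, the arithmetic is ours:

```python
# Growth rates quoted by Latré: hardware performance doubles every
# two years, while AI compute demand grows by a factor of 10 per year.
years = 2

hardware_speedup = 2 ** (years / 2)   # doubles every two years -> 2x after 2 years
demand_growth = 10 ** years           # grows 10x per year -> 100x after 2 years

gap = demand_growth / hardware_speedup
print(f"After {years} years: hardware is {hardware_speedup:.0f}x faster, "
      f"demand is {demand_growth}x larger, leaving a {gap:.0f}x shortfall.")
# After 2 years: hardware is 2x faster, demand is 100x larger, leaving a 50x shortfall.
```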

With a growing number of huge AI models, each with growing levels of intelligence, progress in AI will hit a bottleneck very soon. Imec is looking at ways of solving this problem – and researchers think the best option is at the intersection of software and hardware, building new computer system architectures that can handle the workload and evolve hand-in-hand with the algorithms. 


The training phase of AI is just one of the areas where compute needs are increasing very rapidly. During this phase, machine learning systems crunch enormous sets of labelled data to train neural networks. But it doesn’t stop there: demand for computing power is also rising at other phases in the development and deployment of AI systems.

The models generated in the training phase are very big, with a huge number of neurons and parameters. When these models are deployed in an application that runs 24 hours a day, seven days a week, the requirements for computing power will be similar to what is needed at the training phase. The demand will increase by a factor of 10 every year. 

“We already see that with the DALL-E and GPT-3 type of environments,” says Latré. “They aren’t something you simply download to your computer, press play and start playing with. Five years ago, you could do that. Today you can’t.

“If you look at the criticism of deep learning 20 years ago, people were not saying that it didn’t work, they just said it was too slow to ever build something tangible enough. Computing power grew enough to accommodate those algorithms, but the same problem will occur in five or 10 years from now. We run the risk of having another AI winter, where these systems will kind of work, but will be way too slow to do it all. That’s what will happen if we do nothing about it now.” 

To boost the performance of cloud AI, Imec is building new computer system architectures, technology that will allow designers to build the next generation of supercomputers – so computational performance can catch back up with the requirements of AI algorithms and evolve hand-in-hand with them. Latré thinks the way to meet this increasing demand is not to focus on hardware or software in isolation, but to work on the two together. 

Edge AI 

The second topic Imec is working on is edge AI. While it is important to have huge computing resources in the cloud, it is also important to bring some of the power of current models to smartphones and wearables – and to personalise those models.

“Think of an AI system that is constantly helping me out in how I exercise,” says Latré. “An Apple Watch system could measure my entire lifestyle on the device itself, and it could give me personalised recommendations. That already requires a very complex AI model, but now add to that the fact that it is personalised for me, and things become even more complicated.”

In some ways, the challenges with edge AI are similar to those with cloud AI – but there are important differences. Energy consumption is a bigger concern on edge devices: they run on very small batteries and cannot burn energy the way cloud servers can, which calls for different types of machine learning.

Imec takes the practical approach that the more you know about the application and the sensors, the more efficient you can make your AI model. There are a lot of optimisations that can be made to reduce energy consumption – and that is in stark contrast to cloud AI, which needs to perform general compute services. 
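One common optimisation of this kind is quantisation: storing a model’s weights as 8-bit integers rather than 32-bit floats, trading a little accuracy for large savings in memory and energy. Here is a minimal sketch using PyTorch’s dynamic quantisation – the tiny model is a hypothetical stand-in, not an Imec design:

```python
import torch
import torch.nn as nn

# Illustrative model standing in for an edge AI workload;
# the architecture here is hypothetical, not Imec's.
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# Dynamic quantisation converts the linear layers' weights from
# 32-bit floats to 8-bit integers, cutting memory and energy use
# at a small cost in accuracy -- a typical edge AI trade-off.
quantised = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 64)
print(quantised(x).shape)  # torch.Size([1, 10])
```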

While the AI community is lagging in building specific algorithms for edge applications, Imec feels it is important enough to be considered an area of focus, and is working on it now.  

“We are really looking at the tight integration with sensors,” says Latré. “If you know the type of sensors you have and the kind of application you want to develop, you can then design the AI compute model to fit it. One of the advantages we have as a large research organisation is that we have the range of expertise needed for this integrated approach.”

Imec already possesses some of the key technologies and skills. It has been building sensors for years, it has algorithms that are tailored to sensors, and it builds specific AI computer chips for current-generation deep learning workloads. It is also researching neuromorphic chips, which mimic how the human brain functions. The brain, after all, uses very little energy to perform tasks that no modern AI algorithm can even touch.

AI in healthcare

The third area of focus for Imec is healthcare. A fast-growing domain is personalised medicine – personal treatments that rely on sensors, where AI is often the right technology for producing insights from the vast amounts of raw data those sensors generate. But a specific type of AI is required in this case – and for two reasons.

The first reason is labelling. In most machine learning, it is a trivial matter for a human to look at the training data and supply the labels that are fed into the model, but it is very difficult for an AI to figure out labels by itself. In many health applications, however, humans cannot identify the relevant categories, so the system must find patterns in unlabelled data on its own. This requires unsupervised learning, which is a completely different type of AI.
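To make the distinction concrete, here is a minimal sketch of unsupervised learning using scikit-learn’s k-means: the algorithm groups readings by similarity alone, with no human-supplied labels. The sensor data below is synthetic and purely illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for unlabelled sensor readings -- no human-
# provided labels exist, so a supervised model cannot be trained.
rng = np.random.default_rng(0)
readings = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(100, 3)),   # e.g. one patient group
    rng.normal(loc=3.0, scale=0.5, size=(100, 3)),   # e.g. an anomalous group
])

# k-means groups the readings by similarity alone; a clinician can
# then inspect the clusters to see whether they are meaningful.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(readings)
print(np.bincount(clusters))  # roughly [100 100]
```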

The second reason AI is different for personalised medicine is that it requires more than just image recognition. For AI models to be used in healthcare, three other aspects must be present at a very high level – explainability, trustworthiness and privacy. 

Explainability is about making it possible for humans to understand how an AI model makes its decisions. The training phase produces huge neural networks that are impossible to understand. For AI to be used in healthcare, there needs to be more explainability.  

“Visualisations are often used to explain how an algorithm has arrived at a certain decision,” says Latré. “Consider, for example, a heatmap that visualises which elements in a photo are important to the AI system. Very often, those visualisations are not clearly interpretable at all. Therefore, more recent research focuses on using language to help AI explain, in an understandable way, how it arrived at a decision.”
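The heatmaps Latré describes are often built from gradients, measuring how strongly each input pixel influences the model’s decision. A minimal saliency-map sketch in PyTorch – the untrained model and random image are placeholders for illustration:

```python
import torch
import torch.nn as nn

# Placeholder classifier and input image; any trained vision model
# would slot in here.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.randn(1, 3, 32, 32, requires_grad=True)

# The gradient of the predicted class score with respect to each
# pixel indicates how strongly that pixel influenced the decision.
score = model(image).max()
score.backward()

# Collapse the colour channels into a single-channel heatmap.
heatmap = image.grad.abs().max(dim=1).values.squeeze()
print(heatmap.shape)  # torch.Size([32, 32])
```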

Then there is trustworthiness. AI models take an input and always produce an output, regardless of how sure they are about that output – the degree of certainty is rarely provided. But in healthcare applications, the point is to partially replace doctors with a computer: the algorithm gives medical advice that a doctor may then use. For this to work, the AI has to be as trustworthy as a doctor, which means it needs to be able to express doubt.
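One widely used way to make a network express doubt is Monte Carlo dropout: keep dropout active at inference time and treat the spread of repeated predictions as an uncertainty estimate. A minimal sketch, with a placeholder model rather than any real medical system:

```python
import torch
import torch.nn as nn

# Placeholder diagnostic model with dropout; purely illustrative.
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 2)
)
model.train()  # keep dropout active at inference (Monte Carlo dropout)

x = torch.randn(1, 20)
with torch.no_grad():
    # Each pass samples a different dropout mask, so predictions vary.
    probs = torch.stack(
        [torch.softmax(model(x), dim=1) for _ in range(100)]
    )

mean, std = probs.mean(dim=0), probs.std(dim=0)
print(f"prediction:  {mean.squeeze().tolist()}")
print(f"uncertainty: {std.squeeze().tolist()}")  # high std = the model is unsure
```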

“There are a lot of cases where AI is already doing very well – maybe in 99% of the cases,” says Latré. “But in the 1% of the cases where it fails, it fails miserably. That is definitely not what you want in healthcare.” 


As for privacy, there is a huge opportunity to combine different data sources and find hidden information in ways that can make a big difference. Blood results may be combined with exercise and stress-level information produced by a wearable device. This might also be combined with genetic data and other medical records. By combining all this information, personalised treatment can be improved. 

All this data is linked to an individual, and it sits in different places – at a doctor’s office, in the cloud, or on a wearable device. From a data analysis perspective, it would be nice to combine all this information into a centralised database. But from a privacy perspective, that would be a disaster. 

Imec is researching ways to use data without violating confidentiality. One area of research is Solid, a project initiated by Tim Berners-Lee, inventor of the World Wide Web, and led by him at the Massachusetts Institute of Technology.

“We have a lot of expertise in building this concept,” says Latré. “Solid is a personalised data vault where you, as a citizen, have access to all of your data. You are able to control who gets access to what, and you are also able to revoke that access again.

“At Imec, we are focusing on protecting privacy on top of that, at the machine-learning level. You have personalised data vaults with data for yourself, and maybe a database at the clinician’s and maybe at the genomic centre. The question we are trying to answer is: how can we build a machine learning system that is able to learn all these different patterns, but without centralising the data, which would make it vulnerable to data breaches?”

This concept is called federated learning. The machine-learning AI model travels from database to database, learning from each dataset without ever combining the data from different sources. Imec believes that the best approach is to combine federated learning with edge AI, which puts intelligence close to the sensor. 
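One common realisation of this idea is federated averaging: each site computes a model update on its own private data, and only the weights – never the data – are aggregated. A minimal sketch in Python; the linear model and synthetic site data are illustrative assumptions, not Imec’s system:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a site's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
# Three sites (e.g. a personal data vault, a clinic, a genomic centre)
# whose raw data never leaves the site -- only weights are shared.
sites = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(3)]

weights = np.zeros(5)
for _ in range(20):
    # Each site trains locally on its own data...
    updates = [local_update(weights, X, y) for X, y in sites]
    # ...and the server averages the weight updates (federated averaging).
    weights = np.mean(updates, axis=0)

print(weights.round(3))  # a shared model, learned without pooling any data
```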

Many challenges remain, but Latré is convinced that the most important work lies in taking the fruits of pure research to industry. “We have already had a fair number of breakthroughs from a theoretical perspective in the last five years,” he says. “Where I see breakthroughs in the near future is at the application level. This involves translating the theoretical breakthroughs into something that can be used by industry.”
