
IBM says all the building blocks for analogue AI are in place

Analogue electronics promises a more efficient way to train AI systems than computers that rely on zeros and ones

IBM Research has published a paper discussing a breakthrough in the use of analogue computing for artificial intelligence (AI) calculations.

When building AI systems, the model needs to be trained. This is the process in which weights are adjusted so the model learns to recognise patterns in the training data, such as the distinct visual features of a cat in image data, for example.

When AI systems are being trained on a traditional (digital) computer, the AI model is stored in discrete memory locations. Computational tasks require constantly shuffling data between the memory and processing units. IBM said this process slows down computation and limits the maximum achievable energy efficiency.

Using analogue computing for AI potentially offers a more efficient way to achieve the same results as AI run on a digital computer. Big Blue defines analogue in-memory computing, or analogue AI, as a technique that borrows key features of how neural networks run in biological brains. Its researchers said that in human brains, and those of many other animals, the strength of synapses, known as weights, determines communication between neurons.

In analogue AI systems, IBM said these synaptic weights are stored locally in the conductance values of nanoscale resistive memory devices such as phase-change memory (PCM). The stored weights are then used to run the multiply-accumulate (MAC) operations that dominate the workload of deep neural networks.

According to IBM, the technique mitigates the need to send data constantly between memory and processor.
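The idea behind this in-place computation can be sketched with a toy model (this is an illustration of the general crossbar principle, not IBM's implementation): weights are stored as conductances, inputs are applied as voltages, and Ohm's and Kirchhoff's laws make each output column accumulate the MAC result as a current, with no weight data ever leaving the memory array.

```python
# Toy model of an analogue crossbar MAC. Each synaptic weight is stored
# as a conductance G (siemens) at a crosspoint; input activations are
# applied as voltages V (volts) on the rows. By Ohm's and Kirchhoff's
# laws, each output column j accumulates a current
#   I_j = sum_i V_i * G[i][j]
# which is exactly a multiply-accumulate, computed where the data lives.

def crossbar_mac(voltages, conductances):
    """Return the current read out of each column of the crossbar.

    voltages     -- input activations applied to the rows
    conductances -- conductances[i][j] is the weight at row i, column j
    """
    n_cols = len(conductances[0])
    return [
        sum(v * row[j] for v, row in zip(voltages, conductances))
        for j in range(n_cols)
    ]

# Two inputs, three outputs: one matrix-vector product in a single
# parallel read, instead of shuttling weights to a processor.
V = [1.0, 0.5]
G = [[0.2, 0.4, 0.0],
     [0.6, 0.0, 0.8]]
print(crossbar_mac(V, G))  # [0.5, 0.4, 0.4]
```

In a physical device the multiplication and summation happen simultaneously across the whole array, which is where the energy-efficiency claim comes from.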

In a paper published in Nature Electronics, IBM Research introduced a mixed-signal analogue AI chip for running a variety of deep neural network (DNN) inference tasks. According to IBM, this is the first analogue chip shown to be as adept at computer vision AI tasks as its digital counterparts, while being considerably more energy efficient.

The chip was fabricated in IBM’s Albany NanoTech Complex. It is built on 64 analogue in-memory compute cores (or tiles), each of which contains a 256-by-256 crossbar array of synaptic unit cells. IBM said time-based analogue-to-digital converters are integrated in each tile to convert between the analogue and digital domains. Each tile is also integrated with lightweight digital processing units, which IBM said perform non-linear neuronal activation functions and scaling operations.
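The dataflow through one such tile can be sketched as follows. This is a hypothetical, heavily simplified model, not IBM's design: the analogue-to-digital conversion is modelled as plain uniform quantisation rather than a time-based converter, ReLU is assumed as the activation function, and the array is cut down from 256-by-256.

```python
# Hypothetical sketch of one in-memory compute tile's dataflow:
# (1) analogue crossbar MAC, (2) ADC quantisation of the column
# currents, (3) a lightweight digital step applying scaling and a
# non-linear activation (ReLU assumed here for illustration).

def tile_forward(inputs, weights, adc_levels=256, scale=1.0):
    n_out = len(weights[0])
    # (1) Analogue MAC: currents accumulate down each crossbar column.
    raw = [sum(x * row[j] for x, row in zip(inputs, weights))
           for j in range(n_out)]
    # (2) ADC: quantise each analogue column current to a digital code
    # (uniform quantisation stands in for the time-based converter).
    full_scale = max(abs(r) for r in raw) or 1.0
    step = 2 * full_scale / (adc_levels - 1)
    digital = [round(r / step) * step for r in raw]
    # (3) Digital post-processing: scaling plus ReLU activation.
    return [max(0.0, scale * d) for d in digital]

# Three inputs mapped onto two output neurons of one tile.
out = tile_forward([1.0, -0.5, 0.25],
                   [[0.3, -0.2],
                    [0.1,  0.4],
                    [-0.8, 0.0]])
print(out)
```

Chaining tiles like this, with each tile holding one layer's weights, is how a multi-layer network maps onto the chip, as the next paragraph describes.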

According to IBM, each of these tiles can perform the computations associated with a layer of a DNN model. “Using the chip, we performed the most comprehensive study of compute precision of analogue in-memory computing and demonstrated an accuracy of 92.81% on the CIFAR-10 image dataset,” the authors of the paper said.

