Why are we serving up chips?

Custom hardware is usually the only option available to organisations that need to achieve the ultimate level of performance for AI applications. But Nvidia has taken massive strides in flipping the unique selling point of its graphics processing units (GPUs) from the ultimate in 2D and 3D rendering demanded by hardcore gamers to the world of accelerated machine learning.

While it has been late to the game, Intel has quickly built out a set of technologies, from field programmable gate arrays (FPGAs) to processor cores optimised for machine learning.

For the ultimate level of performance, a custom application-specific integrated circuit (Asic) allows the microelectronics to be engineered to perform a given task with the lowest possible latency.

Custom approach: Tensor processing unit

Google has been pioneering this approach for a number of years, using a custom chip called a tensor processing unit (TPU) as the basis for accelerating its TensorFlow open source machine learning platform.

Its TPU hardware topped the MLPerf v0.5 machine learning benchmarks published in December 2018.
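For readers who want a sense of how TPU acceleration is exposed to developers, the sketch below shows the standard way a Keras model is placed on a Cloud TPU using TensorFlow's distribution API. It assumes TensorFlow 2.x; the TPU address "my-tpu" and the tiny model are illustrative placeholders, not details from Google's benchmark submissions.

```python
# Minimal sketch: running a Keras model on a Cloud TPU with TensorFlow 2.x.
# The TPU name "my-tpu" and the model architecture are illustrative assumptions.
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Variables created in this scope live on the TPU cores.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# model.fit(train_dataset)  # training steps are compiled for and executed on the TPU
```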

Beyond Asics, IBM is now investigating how, in certain very specific application areas, quantum computing could be applied to accelerate supervised machine learning. It is actively looking to crowdsource research that can identify which datasets are well suited to quantum-accelerated machine learning.

Another option is the FPGA. Since it can be reprogrammed, an FPGA offers a more flexible and cheaper alternative to an Asic, which is why Microsoft is looking at using FPGAs in its Brainwave initiative to accelerate machine learning in the cloud.

GPUs rule mainstream ML

Nvidia has carved out a niche for more mainstream AI acceleration using its GPU chips. According to a transcript of its Q4 2019 earnings call posted on the Seeking Alpha financial blogging site, the company believes deep learning offers a massive growth opportunity.

Nvidia CFO Colette Kress said that while deep learning and inference currently drive less than 10% of the company’s datacentre business, they represent a significant expansion of its addressable market opportunity going forward.

In a recent whitepaper describing the benefits of GPUs, Nvidia stated that neural networks rely heavily on matrix math operations, and that complex multi-layered networks require tremendous amounts of floating-point performance and bandwidth for both efficiency and speed. “GPUs have thousands of processing cores optimized for matrix math operations, providing tens to hundreds of TFLOPS of performance. GPUs are the obvious computing platform for deep neural network-based artificial intelligence and machine learning applications,” it claimed.
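The matrix math Nvidia refers to can be made concrete with a few lines of PyTorch, one of the frameworks named in this article. The sketch below expresses a single fully connected layer as a matrix multiply and offloads it to a GPU if one is present; the tensor shapes are illustrative assumptions, not figures from the whitepaper.

```python
# Minimal sketch: the matrix math at the heart of a neural network layer,
# offloaded to a GPU with PyTorch. Shapes are illustrative assumptions.
import torch

# Fall back to the CPU if no CUDA-capable GPU is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A fully connected layer is essentially activations @ weights + bias.
activations = torch.randn(1024, 4096, device=device)
weights = torch.randn(4096, 4096, device=device)
bias = torch.randn(4096, device=device)

# On a GPU this multiply is spread across thousands of cores; lower-precision
# formats such as FP16 raise throughput further on recent hardware.
output = torch.relu(activations @ weights + bias)
print(output.shape, output.device)
```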

Optimising x86 CPUs

Intel’s chips are CPUs, optimised for general-purpose computing. However, the company has begun to expand its Xeon processors with DL Boost (deep learning) capabilities. Intel claims these have been designed to optimise frameworks such as TensorFlow, PyTorch, Caffe, MXNet and PaddlePaddle.
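DL Boost is aimed at the low-precision integer arithmetic used during inference. As an illustration of the kind of workload it targets, the sketch below uses PyTorch's dynamic quantisation to convert a trained model's linear layers to int8, the format Xeon's AVX-512 VNNI instructions are built to accelerate. The model is a stand-in, not one of Intel's or its customers' workloads, and this is framework-level quantisation rather than Intel's own tooling.

```python
# Minimal sketch: quantising a trained model to int8 for CPU inference, the
# kind of low-precision arithmetic DL Boost (AVX-512 VNNI) accelerates.
# The model is an illustrative stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Dynamic quantisation converts the Linear layers' weights to int8; on Xeon
# CPUs with VNNI the resulting int8 matrix multiplies map onto dedicated
# instructions.
quantised = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    print(quantised(torch.randn(1, 784)).shape)
```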

It hopes organisations will choose its CPUs over GPUs because they generally fit in with what businesses already have. For instance, Siemens Healthineers, a pioneer in the use of AI for medical applications, decided to build its AI system around Intel technology rather than GPUs. The healthcare technology provider stated: “Accelerators such as GPUs are often considered for AI workloads, but may add system and operational costs and complexity and prevent backward compatibility. Most systems deployed by Siemens Healthineers are already powered by Intel CPUs.” The company aims to use its existing Intel CPU-based infrastructure to run AI inference workloads.

So it seems developments in hardware are becoming increasingly important, and the web giants and leading tech firms are investing heavily in AI acceleration hardware. At the recent T3CH conference in Madrid, Gustavo Alonso, of the systems group in the Department of Computer Science at ETH Zürich, noted that AI and machine learning are expensive. “Training large models can cost hundreds of thousands of dollars per model. Access to specialised hardware and the ability to use it will be a competitive advantage,” he said in his presentation.
