I am indebted to Slashdot.org for the link to this fascinating research from Purdue University into something called “approximate computing”. For those of you, like myself, unaware of what “approximate computing” means, it refers to the ability “to perform calculations that are good enough for certain tasks that don’t require perfect accuracy”.
As Purdue Professor of Electrical and Computer Engineering Anand Raghunathan explains, computers were first designed “to be precise calculators that solved problems where they were expected to produce an exact numerical value”. But things are changing, because the demand for computing today “is driven by very different applications”.
Professor Raghunathan argues that mobile and embedded devices are processing richer media and getting smarter by “understanding us, being more context-aware and having more natural user interfaces”. At the same time, there is an explosion in digital data searched, interpreted and mined by data centres.
So, essentially, what most people require in these circumstances is not a definitive answer, because the problems they are posing have no single exact numerical solution and do not demand a precise one. In the words of Srimat Chakradhar, department head for Computing Systems Architecture at NEC Laboratories America, who collaborated with the Purdue team: “Here, you are looking for the best match since there is no golden answer, or you are trying to provide results that are of acceptable quality, but you are not trying to be perfect.”
In other words, approximate computing. Why would anyone bother with it? Because it saves power and is more efficient. According to the Purdue researchers, if you can develop computers that use approximate computing you can potentially double efficiency and reduce energy consumption.
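To make the trade-off concrete, here is a minimal sketch of one well-known software-side approximation technique, loop perforation, which simply skips a fraction of a loop’s iterations. This is my own toy illustration, not the Purdue team’s hardware design: the function names and the every-fourth-element sampling rate are assumptions chosen for the example.

```python
import random

def exact_mean(values):
    """Baseline: inspect every element (full cost, exact answer)."""
    return sum(values) / len(values)

def approximate_mean(values, skip=4):
    """Loop perforation: process only every `skip`-th element.
    Does roughly 1/skip of the work, at the cost of a small error."""
    sample = values[::skip]
    return sum(sample) / len(sample)

# A stand-in workload: 100,000 noisy "pixel brightness" readings.
random.seed(42)
pixels = [random.gauss(128, 30) for _ in range(100_000)]

exact = exact_mean(pixels)
approx = approximate_mean(pixels, skip=4)
error = abs(exact - approx) / exact
print(f"exact={exact:.2f} approx={approx:.2f} relative error={error:.2%}")
```

With skip=4 the approximate version touches a quarter of the data, yet for a task like estimating average brightness the answer is typically within a fraction of a percent of the exact one, which is exactly the “good enough” bargain approximate computing offers.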
It’s a point also made in a paper entitled Computing, Approximately by Ravi Nair and Daniel A. Prener of the IBM Thomas J. Watson Research Center, where the authors state: “It is clear that the preciseness of today’s computational model comes at a cost – a cost in the complexity of programming a solution, a cost in the verification of complex behaviour specification, and a cost in the energy expended beyond the minimum needed to solve the problem.”
They said approximate computing reflected the notion “that there are computational and energy efficiencies to be gained by relaxing some of the strict rules at all levels. We believe that approximate computing could represent the right model for most computational needs of the future – activities such as decision support, search, and data-mining are consuming increasingly greater cycles of enterprise computing compared to traditional activities like accounting and inventory control”.
Nair and Prener said that despite an unprecedented amount of data being produced in the world today, cost and energy considerations are limiting growth in the compute capability required to process and analyse that data. “It is likely that approximate computing will fulfil the processing needs of much of this new data being produced, especially by sensor networks. It is time for our community to begin designing new models for algorithms, architectures, and implementations to support approximate computing and help bridge this widening gap between the generation of data and the processing of this data in an energy-efficient manner”.
Which is something the Purdue team has started to achieve by showing how to design a programmable processor that can perform approximate computing.
“We have an actual hardware platform, a silicon chip that we’ve had fabricated, which is an approximate processor for recognition and data mining,” Raghunathan said. “Approximate computing is far closer to reality than we thought even a few years ago.”
How close? He didn’t say with perfect accuracy, but his answer was good enough for me. After all, I’m not a computer.