Innovation: The next big thing

Feature

"All you can eat telecoms" and "smart yogurt" computers are just two projects that could revolutionise everyday life in coming decades. Danny Bradbury looks at some of the challenges facing those pushing the frontiers of information technology


A year may be a long time in computing, but what about 10 years? Or 20? To try to understand where technology is headed, we have to identify the challenges facing the sector in the coming decades.

The British Computer Society recently published the results of a Grand Challenges conference held in 2004. The Grand Challenges workshop, sponsored by the UK Computing Research Committee (UKCRC), was judged by a BCS expert panel with support from the Engineering and Physical Sciences Research Council.

Grand Challenges is the name given by the scientific community to long-term initiatives involving separate projects which may need international or national co-operation.

Submissions were solicited from the academic community and none were turned down. Instead, they were aggregated into broad research categories that became the grand challenge definitions for computing research - the milestones that researchers will try to reach over the next few decades.

Tony Hoare, senior researcher at Microsoft's Cambridge research centre, who co-edited the BCS's Grand Challenges in Computing Research report, says, "The novelty of the whole exercise is scientists taking responsibility for the future of their own discipline. Things are so strongly driven by the enormous success of computer applications, we sometimes forget it is a subject with a central core of scientific interest independent of application."

Ubiquitous computing

Of the seven proposals, two - science for global ubiquitous computing and scalable ubiquitous computing systems - relate to distributed systems, supporting the design of networks that can expand to handle a growing number of devices.

Although one strand emphasises the theory and another examines engineering, both address a broadly similar goal: the idea that in the future, sensors with computing capabilities will be so cheap, small and abundant, it will be pointless to think of them as individual units.

Instead, they will need to be addressed as a single, highly distributed entity with millions of discrete agents embedded everywhere, including in vehicles and buildings - perhaps even in your clothes. In future, your T-shirt may monitor your heartbeat and vital signs, and call your doctor if you get into difficulties.

This is something companies such as BT have already accepted. Ian Pearson, futurologist at BT's research arm, BT Exact, says ubiquitous computing will make Wi-Fi hotspots a thing of the past and render communications free.

A blanket network of sensors, each costing less than a penny and drawing power from the environment, would be able to relay calls anywhere. This would create a very different telecommunications industry to the one that exists now, he says.

"The costs will come down, moving away from the point where we will charge for individual calls and going to an 'all you can eat' model." You will buy your connectivity, security and value-added services and the calls will essentially be free.

Journeys in non-classical computation

One of the biggest challenges for ubiquitous computing is the idea of non-sequential computing. Having lots of agents all making autonomous decisions represents a drastic departure from the sequential computing model first described by John von Neumann in 1945. This is one of the areas considered in another Grand Challenge category, designed to cope with new methods of computing: journeys in non-classical computation.
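
The contrast can be sketched in a few lines of Python (agent count and timings below are arbitrary): rather than one processor stepping through instructions, several agents sense and act on their own independent clocks, with no central scheduler:

```python
import random
import threading
import time

# Many agents, each sensing and deciding on its own clock - a toy
# departure from the single sequential instruction stream.
readings = {}
lock = threading.Lock()

def agent(agent_id):
    """Sample a simulated sensor a few times, at the agent's own pace."""
    for _ in range(5):
        value = random.random()
        with lock:
            readings[agent_id] = value            # publish local observation
        time.sleep(random.uniform(0.001, 0.01))   # independent timing

threads = [threading.Thread(target=agent, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(readings)} agents reported with no central coordinator")
```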

This category takes into account various alternatives to classical von Neumann processing. It addresses the problem from the bottom up, examining how non-classical philosophy and physics can affect future computing.

Quantum computing is a case in point. Relying on the superposition principle, which allows a quantum particle to occupy two states at once, a quantum computer could in theory hold every combination of its bits' values simultaneously, letting it work through the inputs to many calculations at once.
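
The effect can be simulated classically, albeit at exponential cost. This illustrative NumPy sketch applies a Hadamard gate to each of three simulated qubits, leaving the register in an equal superposition of all eight basis values at once:

```python
import numpy as np

# The Hadamard gate puts one qubit into an equal mix of |0> and |1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

n = 3
state = np.zeros(2 ** n)
state[0] = 1.0                        # start in |000>

# Apply H to every qubit via the Kronecker product H (x) H (x) H.
H_all = H
for _ in range(n - 1):
    H_all = np.kron(H_all, H)
state = H_all @ state

print(state)                          # eight equal amplitudes of 1/sqrt(8)
print("probabilities:", np.round(state ** 2, 3))
```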

Although quantum computing is still in the lab, the use of quantum science to exchange security keys is more advanced. "What interests me is that some of these things are much closer to applications than others," says Martin Illsley, director of the European research and development team at Accenture.

"In quantum cryptography, we are already seeing some early pilots and we are integrating that into our security practice," he says. id Quantique of Geneva will sell you a quantum cryptography key distribution system for commercial use, and security specialist Qinetiq has been transmitting quantum keys between mountain tops in Germany.

In vivo-in silico (iViS)

But quantum computing is not the only area of interest to non-classical computing researchers. Biological systems will provide the field with much inspiration, says the BCS report, because living organisms have much to teach us about non-sequential, autonomous processing (consider, for example, how individual cells know what they should be doing without any central control).

Genetic algorithms and neurology will be an important part of this challenge, as will artificial immune systems. The Royal Mail has already trialled the latter as a means of automatically detecting fraud at its branches.
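
For a flavour of the genetic approach, the minimal Python sketch below - the problem and every parameter are invented for illustration - evolves bit strings towards all-ones using nothing but selection, crossover and mutation:

```python
import random

# A minimal genetic algorithm: fitness is simply the number of 1s.
random.seed(1)
LENGTH, POP, GENERATIONS = 20, 30, 40

def fitness(genome):
    return sum(genome)

population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]           # keep the fitter half
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, LENGTH)     # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:             # occasional mutation
            child[random.randrange(LENGTH)] ^= 1
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", fitness(best), "out of", LENGTH)
```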

Pearson believes bio-inspired computing could lead to computers that grow themselves. He foresees a computer in 2015 that will look more like a pot of yoghurt than a metal box.

"It would have a suspension of particles in it, each of which is a cluster of neurons," he says of first generation "smart yoghurt" devices. "They will be mostly analogue and will communicate photonically with each other rather than using hard wiring."

The second generation would use self-assembling DNA to create protein clusters where the logic would be inside the cells. The cells could then be programmed to grow themselves, he says.

It is no surprise that biology fits so well into the future of IT, says Wendy Hall, professor of computer science at Southampton University and BCS president from 2003-04.

"We are looking at new degrees where it is not just computer science-inspired biology or biology-inspired computing, it is a new type of person who understands how to build complex systems. To do that, they have to be both a computer scientist and a biologist," she says.

One of the grand challenges defined in the BCS report is in vivo-in silico (iViS), which focuses on trying to simulate basic organic systems. Initially the challenge would be restricted to simple life forms such as the nematode worm, the weed Arabidopsis and single-cell organisms such as baker's yeast: hence its subtitle, "the virtual worm, weed and bug".

However, creating computer simulations of, say, the various developmental stages of an earthworm that let you zoom from a cellular to a whole body level at any point in time is a huge undertaking because of the complexity of even a simple organic form.

Dependable systems evolution

However, David Patterson, president of the Association for Computing Machinery (ACM) in the US, is eager not to let biology overshadow IT. "IT is going to be the foundation of all research. We have this ability to simulate rather than perform physical experiments and that is a pretty powerful notion," he says.

Patterson was on the organising committee for a US-based Grand Challenges programme organised by the Computing Research Association (CRA) in 2002 (the same year the UKCRC began its project). Patterson says the US organisation's goals were more short term than those of the UK project. But the consensus reached by the two separate exercises is significant.

Just like the UKCRC, the CRA identified ubiquitous sensor-driven networks as a Grand Challenge, although, in keeping with its shorter-term objectives, it tied the challenge to an application, positing its use for sensing, warning about and helping to recover from natural disasters.

Similarly, Systems You Can Count On, another of the five grand challenge groups identified in the US project, maps to one of the UK grand challenge categories: the less snappily named Dependable Systems Evolution.

The US category focuses on systems to eliminate cyberterrorism, user error and data loss, and was expanded on in a subsequent Grand Challenges workshop on cybersecurity organised by the ACM and the CRA. It resonates with the UK's Dependable Systems Evolution category, which focuses on verifiable systems.

Verifiable systems, which can mathematically prove their reliability and security before they run, have been a holy grail for the IT community since at least the 1960s.
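
Hoare's own formalism gives a flavour of what "verifiable" means here. A Hoare triple asserts that if a precondition holds before a command runs, and the command terminates, then a postcondition holds afterwards. A trivial example in the standard notation:

```latex
% {P} C {Q}: if P holds before C runs and C terminates, Q holds after.
\[
  \{\, x = n \,\} \quad x := x + 1 \quad \{\, x = n + 1 \,\}
\]
```

A fully verified system would carry proved assertions like this for every component, composed all the way up to the whole program.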

As systems become more complex, this becomes harder to do - Microsoft evidently has not been able to mathematically prove the security of the millions of lines of code in Windows, for example, or patches would be a thing of the past. As we start building the vast, distributed ubiquitous computing systems proposed in the other categories, that job will become harder still.

Certain about uncertainty

Eugene Spafford, co-chairman of the ACM's US Public Policy Committee and a member of the US President's Information Technology Advisory Committee, does not harbour much hope for an absolutely provable system, but offers an alternative.

"The very formal definitive closed metric is likely well beyond anything we can come up with, if it is even possible to do. Certainly with humans involved, there is a random element we may never predict. But there are many fields when appropriate stochastic measures let us make risk decisions," he says. 

The security and reliability of future software may not be verified in terms of absolutes, but in terms of risk. The idea of defining a mean time between failure for a software application in the same way as for a hard drive may not be so far-fetched.

Bayesian statistics is one technique for producing a probability from a series of highly complex, interconnected factors. You may not be able to verify a machine as completely infallible, but at least you can say how certain you are that it will not crash.
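
As an illustration - the run counts are invented and the sketch assumes SciPy is available - a standard Beta-Binomial model turns a batch of test runs into exactly that kind of statement:

```python
from scipy.stats import beta

# Start from a uniform Beta(1, 1) prior over the unknown crash rate,
# then update it on the observed test outcomes.
runs, failures = 1000, 3
posterior = beta(1 + failures, 1 + runs - failures)

print(f"posterior mean crash rate: {posterior.mean():.4f}")
target = 0.01
print(f"P(crash rate < {target}): {posterior.cdf(target):.3f}")
```

Fewer runs or more failures widen the posterior, and the confidence statement weakens accordingly.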

This lack of certainty will become the underlying principle for the future of computing. As we move into areas of high complexity, emulating biological systems in which huge populations of agents act autonomously and in unison, we may have to throw up our hands and accept that we do not know anything for sure.

In 1927, one of the early gurus of quantum physics, Werner Heisenberg, defined a rule for this problem when describing the state of subatomic particles: "The more precisely the position is determined, the less precisely the momentum is known in this instant, and vice versa."

As we rush to meet tomorrow's Grand Challenges, we may know where we are, but we will not be certain how much further we have to travel until we get there - or even when we have arrived.

The Grand Challenges
  • Science for global ubiquitous computing
  • Scalable ubiquitous computing systems
  • Journeys in non-classical computation
  • In vivo-in silico (iViS): the virtual worm, weed and bug
  • Dependable systems evolution
  • Memories for life: managing information over a human lifetime
  • The architecture of brain and mind.

This was first published in March 2005
