Explaining quantum empirically

Last month, IBM introduced the next machine on its quantum computing roadmap. Among the significant implications of larger quantum machines is that it will soon no longer be possible to simulate the behaviour of quantum algorithms on a classical computer architecture.
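The reason is memory: a classical simulator has to track one complex amplitude for every possible state of the qubits, and that number doubles with each qubit added. The sketch below is not IBM's figure, just the standard back-of-the-envelope arithmetic for a dense statevector simulation, but it shows how quickly the storage requirement outgrows any classical machine.

```python
# Rough memory needed to hold a dense statevector of n qubits on a classical
# machine: 2**n complex amplitudes at 16 bytes each (complex128), so the
# requirement doubles with every qubit added.
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (20, 30, 40, 50):
    gigabytes = statevector_bytes(n) / 1e9
    print(f"{n} qubits -> {gigabytes:,.2f} GB")

# 20 qubits ->          0.02 GB   (trivial for a laptop)
# 30 qubits ->         17.18 GB   (a well-equipped workstation)
# 40 qubits ->     17,592.19 GB   (about 17.6 TB, a large server or small cluster)
# 50 qubits -> 18,014,398.51 GB   (roughly 18 petabytes, beyond any classical machine)
```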

For anyone who is very concerned about the black magic going on inside deep learning artificial intelligence algorithms, this has exponentially deeper and darker implications. A quantum computer can solve problems in ways that no one will have enough brain power to follow. And if its workings can’t be understood, how do we know whether the algorithms it runs are correct, unbiased and ethical?

In traditional computer programming, a software developer can run the code line by line using a debugger tool, which executes one computer instruction at a time. This provides a way to investigate whether the code is doing what it is supposed to do. But, as IBM’s chief quantum exponent, Bob Sutor, notes, such debugging isn’t possible on a quantum computer. Explaining the debugging challenge, he says: “While you do operations on a single qubit, you can’t stop and look at its value.” This, he says, is because a qubit relies on quantum mechanics, which means you can’t debug in the same way as on a classical computer.
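To make the contrast concrete, here is a minimal sketch using IBM’s open-source Qiskit SDK (the article itself does not mention Qiskit). On a classical simulator you can pause a small circuit and print its full quantum state, which is the closest thing quantum programming has to a debugger breakpoint; on real hardware the only readout is a measurement, which collapses that state.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# A tiny two-qubit circuit: superposition on qubit 0, then entangle with qubit 1.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)

# On a classical simulator we can pause here and inspect every amplitude of the
# quantum state, the nearest thing to stopping at a breakpoint. This only works
# while the circuit is small enough to simulate classically.
print(Statevector.from_instruction(qc))

# On real hardware there is no such peek: the only readout is a measurement,
# which collapses the superposition into a single classical bitstring.
qc.measure_all()
```

The catch is that the statevector peek only works while the circuit is small enough to simulate classically, which is exactly the capability that larger quantum machines leave behind.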

Wet chemistry

Among the target use cases for quantum computing is simulating a chemical reaction. This is regarded as a good fit, since the quantum mechanics that governs a quantum computer’s qubits is also the science behind chemical reactions. Verifying that a quantum simulation of a chemical reaction is correct is then a matter of running the real chemical reaction to see if it behaves as the simulation predicted.

What about things that cannot be verified or explained through observation of a chemistry experiment in a laboratory? Computer Weekly recently spoke to a number of experts who have been working on ways to make quantum algorithms verifiable and explainable. At one level, it may be possible to develop empirical building blocks, each of which represents a mathematical primitive.

These building blocks can be verified either on a classical computing architecture or on a smaller quantum computer. The general idea is to use compositional tools to break big problems down into small parts based on these primitives and empirical building blocks. Bob Coecke, theoretical physicist and chief scientist at Cambridge Quantum Computing, believes that by representing these building blocks diagrammatically, almost anyone can understand quantum processes.
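The article does not name a specific verification tool, so the following is only a hedged illustration of the “verify a small building block classically” idea, again using Qiskit rather than the diagrammatic calculus Coecke works with. Because each block acts on only a handful of qubits, its full unitary matrix fits comfortably in classical memory, and two candidate implementations of the same primitive can be compared exactly.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Operator

# Building block A: a plain CNOT, control on qubit 0, target on qubit 1.
block_a = QuantumCircuit(2)
block_a.cx(0, 1)

# Building block B: the same primitive written another way, as a CNOT with
# control and target swapped, sandwiched between Hadamards on both qubits.
block_b = QuantumCircuit(2)
block_b.h([0, 1])
block_b.cx(1, 0)
block_b.h([0, 1])

# Each block touches only two qubits, so its full unitary matrix is tiny and
# can be computed classically; the two implementations are then compared
# exactly (up to a global phase).
print(Operator(block_a).equiv(Operator(block_b)))  # True
```

Once each block has been checked in isolation, the compositional step is to argue that wiring verified blocks together preserves correctness, which is where the diagrammatic reasoning comes in.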

The conversations surrounding AI are a precursor to what we can expect as quantum computing takes hold. Let’s hope enough people are thinking about this now, before it’s too late.
