Ever since Charles Babbage conceived his 'analytical engine' back in 1837, the idea of computers that appear as smart as humans has been an aspirational but elusive goal.
But we are getting closer. Smart systems like IBM’s Watson, autonomous vehicles and a growing army of robots are quietly making more and more decisions every day, decisions that increasingly affect our lives.
While the benefits of speed, perfect recall, objectivity and repeatability characterise the machines (and differentiate them from all-too-unreliable humans!), this steady advance raises some challenging questions, which have little to do with the technology.
If a machine makes a decision and gets it wrong, who is responsible? In our current legal environment, someone (or some legal entity, such as a corporation) still bears the liability even when the machine does the work; that is one reason why a pilot still sits up front in an aircraft. But we are now starting to face a real dilemma. Smart machines are close to outsmarting humans, whether driving a car or determining a medical diagnosis, leaving the human overseer with the responsibility but reduced capability.
But if we take the major step of changing our legal systems to give machines responsibility for their own actions, can they also expect rights? The right to power, or to maintenance perhaps, directly emulating established human rights? And what if they turn nasty? There is already a heated debate over the ethics of smart machines in warfare, amid calls for their use to be outlawed.
Over 70 years ago, science fiction author Isaac Asimov proposed his Three Laws of Robotics. Now would be a good time to revisit them and consider the future of smart machines. Science fiction is fast becoming science fact!