In late January this year, the French government announced it would start putting together a strategy for artificial intelligence (AI). For a country with an abundance of mathematicians and scientists highly skilled in many of the fundamental techniques underlying AI, this seemed a little late in coming.
The country’s relatively slow adoption is partly the result of public confusion, and partly due to the way the French have approached the ethical debate around AI. French psychiatrist and member of l’Académie des Technologies, Serge Tisseron, thinks it would help if the general public had a more accurate understanding of what AI is.
Tisseron, who authored a book on the subject, Le jour où mon robot m’aimera – vers l’empathie artificielle, said: “People think right away that AI systems are as powerful as the human mind. We talk about artificial empathy, neural networks, and autonomous machines – all references to human traits. This tendency towards anthropomorphism misleads the general public. And now the European parliament is even talking about electronic persons.”
Indeed, a draft resolution granting AI systems the legal status of “electronic persons” is now being considered by the EU parliament. One of the reasons for granting machines a legal status is so they might be counted and assigned to human owners, who could then pay taxes and be held liable for the actions of their cyber property.
But taxes and liability are just two of the issues to be addressed. The debate also extends to how an increasing dependency on AI systems might affect the way we interact with the world. For example, Tisseron said the French want to be able to tell when they’re talking with a robot. They want to know when they’re chatting with an AI system on the internet, and they want to be given the choice to communicate with a person instead.
Gartner research fellow Frank Buytendijk, who specialises in digital ethics, provides a different view. He argued there are advantages to making robots that look and act as human as possible. “This would help us by driving more civilised and caring behaviour towards computer systems. If we spend all day talking coldly to robots, how are we going to talk to our children in the evening?”
The French discuss some of these broader issues around AI ethics, but they’re also particularly sensitive to certain specific consequences of AI. “While people in all countries are afraid of losing jobs to artificial intelligence, the French don’t think they’ll ever find another job. The French aren’t dynamic in that way. Americans are,” said Tisseron.
Read more about artificial intelligence
- Government digital strategy’s plans to boost the artificial intelligence industry include a £17m fund to support new technological developments.
- Data61 CEO Adrian Turner says Australia needs to reskill workers rather than implement a robot tax.
- The belief that advances in technology will create more jobs than they replace may no longer hold true.
But Buytendijk, who looks at AI ethics from a worldwide perspective, is less worried about job losses. “We’ve already gone through a similar transition when we moved away from agriculture.”
It’s interesting to contrast France’s approach towards the dangers of artificial intelligence with its approach to nuclear energy. The French have embraced nuclear energy more than any other country in the world, and they’ve done so with relatively little public debate to slow them down.
Exploring the ethics of AI
But when it comes to the ethics of AI, France tends to engage a broad group of players in a prolonged debate on what society stands to gain and lose with smart machines. In stark contrast to the recent partnership formed by Google, Facebook, Amazon, IBM and Microsoft to explore ethical best practices around AI, France has chosen to engage in a wider discussion by including philosophers, theologians, psychiatrists and other people motivated more by social justice than by profit and world dominance.
“Letting these large companies control the discussion around the ethics of AI is like letting car manufacturers control the decision-making process around automobile safety and anti-pollution measures,” said Tisseron. “A principle we try to adhere to in France is that you can’t be a major player as well as the judge.”
Another fundamental issue in some countries is that AI systems need access to data that may be considered private. Machine learning requires lots of information. But this need for access to data clashes with data privacy issues in Europe, and especially in France, where data privacy was legislated all the way back in 1978.
Not only does it take a lot of data, but it also takes a lot of time for an algorithm to get it right. Take, for example, the case of Tay. Microsoft’s Twitter bot was designed to learn from other people’s tweets and then generate its own messages. It didn’t take long for hackers to figure out that they could feed Tay racist and sexist tweets to get it to produce its own offensive messages.
The episode was embarrassing for Microsoft, which immediately took the system down, but Buytendijk applauded the company for having run the experiment. “After all,” he said, “nobody was hurt and we all learned from it. In this case, we all said very quickly that something had gone wrong. When the results are less obvious, the damage can be greater. The trouble is, we may not know when the data is bad.”
Half of France still fears AI
When all is said and done, few developed countries are more concerned about the dangers of AI than the French. A 2016 survey financed in part by Microsoft and conducted by Odoxa suggests that half of the French population is downright scared of artificial intelligence. A nation with the potential to make a huge contribution to the latest revolution in information technology stands frozen by indecision – and as a result, may miss out on many of the benefits.
Tisseron said there are two things France should do to ease the introduction of AI. The first is to legislate against false advertising – ads that lead people to believe AI systems have emotions and are autonomous. “Ads that imply robots have emotions cause people to forget that the robots are simply machines that are programmed by software engineers,” he said. “AI systems are not autonomous. They run a set of algorithms they are designed to run, and that’s all.”
“The second thing to do,” said Tisseron, “is to educate children that AI is both wonderful and dangerous. We didn’t do this for nuclear energy. We accepted nuclear energy, full stop. But for AI, we should talk about both the huge benefits and grave dangers. Our children need to learn this kind of complex thinking about the coexistence of huge benefits and grave dangers.”