In a panel discussion at this week’s World Economic Forum in Davos, Joi Ito, director of the MIT (Massachusetts Institute of Technology) Media Lab, warned that computer scientists and AI (artificial intelligence) engineers generally come from similar backgrounds.
This means AI systems are often programmed to see the world the way their trainers do, leading to gaps in their knowledge.
“I think that people who are very focused on computer science and AI do not like the messiness of the real world,” said Ito. “You do not get the traditional liberal arts types or philosophy types.”
According to Ito, this is because of the way people generally get into the computer industry. “If you look at the demographic of Silicon Valley, it is generally white men,” he said.
As an example of the risks this could present, Ito said: “One of our researchers, an African American woman, discovered that in the core libraries for face recognition, dark faces don’t show up. And these libraries are used in many of the products you have.”
This means that if an African American person were to stand in front of a facial recognition system, the AI would be unable to identify his or her face.
Ito suggested that a lack of diversity in the location where the AI was being built and tested had led to this oversight.
“One of the risks in the lack of diversity among the engineers is that it is non-intuitive about which questions you should be asking,” he said.
Get users involved
Even if the industry develops sound design guidelines, some of the decisions made when developing AI systems need to be taken outside the laboratory, he said. “I think one of the things we need to do is that when people create the tools, they need to get users involved.”
Ito’s example highlights the importance of training AI systems with a wide set of scenarios.
Kitty Parr, founder and CEO of Social Media Compliance (SMC), a company that uses AI for image analysis, said at Davos: “AI informed intelligence software will always learn from current scenarios. It is only as good as the programmers.”
Ito also warned that users may not fully appreciate the power of the AI tool they are running. “AI is still a bespoke art with a super duper engineer who listens to the customer,” he said. Rather than focusing purely on the development of AI tools, Ito urged IBM, Microsoft and other companies building AI systems to get lawyers and ethicists interested in understanding the tools that are being developed.
Speaking about the potential benefits of this, fellow panellist Satya Nadella, CEO of Microsoft, said: “Key for me is the technology underneath, to make AI broadly accessible. Individuals with the right tools can make a difference.”
Nadella said industries were beginning to make use of AI. “A bank is able to give credit to the unbankable,” he said. “Rolls-Royce is creating an airline efficiency system and Volvo is using computer vision to prevent drivers being distracted.”
AI replaces and creates jobs
Judging by the rhetoric in his Twitter feed, Donald Trump’s presidency is likely to focus on maintaining traditional American jobs.
AI, like other technologies such as the use of robots in car production lines, results in job losses as manual jobs are replaced by machines.
The Harnessing Revolution: Creating the Future Workforce report from Accenture found that 87% of workers expect parts of their job to be automated in the next five years.
According to IBM CEO Ginni Rometty, also on the panel in Davos, AI is not a job killer. Rather, she regards it as a key factor in revitalising traditional industries and jobs.
‘New collar’ jobs
In an open letter to Trump following his election victory, Rometty urged him to create “new collar” jobs.
“Meaningful job creation is not about white-collar versus blue-collar jobs, but about the ‘new collar’ jobs that employers in many industries demand, but which remain largely unfilled,” she said. “As industries from manufacturing to agriculture are reshaped by the latest technologies, entirely new jobs are being created where AI is being deployed by major enterprises to augment and amplify human intelligence.”
Rometty’s comments are mirrored in Accenture’s research, which said AI had the potential to double annual economic growth rates and boost labour productivity by up to 40% by 2035.
But, as Computer Weekly has previously reported, success in AI will require new skills from employees.
A Vanson Bourne survey of 1,600 senior decision makers, conducted for Infosys, found that the skills of employees entrusted to lead the AI journey would be critical to its success. However, more than four in 10 respondents do not feel they have the necessary development (42%), security (42%) or implementation (43%) skills required for AI use.
Also, despite the concerns of employees and customers, only a minority of decision makers believe their organisations have AI-related training (47%) or AI-related customer-facing (37%) skills.