There is little doubt that artificial intelligence and machine learning will revolutionise decision-making. But how these new technologies reach their decisions remains a mystery: the processing that goes on behind the scenes relies on mathematical models that cannot easily be explained.
AI relies on accurate data, but data protection regulations can act as a barrier, preventing the access required to train algorithms on more diverse use cases. Without this diversity, the dataset is limited to data from individuals who have opted in to sharing their personal information.
In June, the UK government began a consultation on the use of AI for technical innovation and creative works and is looking to amend UK copyright law.
The proposed changes to UK copyright law would mean anyone who has lawful access to material protected by copyright will be able to carry out data mining without further permission from the copyright owner. Such data mining could improve the accuracy of the data models used in machine learning.
But such changes would almost certainly be at odds with the EU’s GDPR, which organisations around the world have signed up to, irrespective of where they operate.
Nonsense of national AI regulatory frameworks
The Department for Digital, Culture, Media and Sport (DCMS) has now published a paper exploring the regulation of AI technology. The proposed governance framework covers the safety of AI, the explainability and fairness of algorithms, the requirement for a legal person to be responsible for AI, and clearer routes to redress unfairness or to contest AI-based decisions.
All of these things make sense and will help to ensure AI algorithms are deployed responsibly. The government claims its proposed framework is pro-business and pro-innovation. But, again, under the guise of post-Brexit Britain, this government wants to distance itself from existing EU regulations. As such, its decentralised approach to AI regulation is unlikely to be compatible with the more centralised approach of the proposed EU AI Act. Businesses will be torn between complying with the UK or the EU proposals. As Tom Sharpe, AI lawyer at Osborne Clarke, points out, there is a practical risk for UK-based AI developers if, as happened with GDPR, the EU’s AI Act becomes the ‘gold standard’. “To access the EU market, the UK AI industry will, in practice, need to comply with the EU Act,” he said.
The UK and EU proposals are unlikely to be the only governmental regulatory frameworks. Complying with multiple sets of AI rules is certainly not pro-business. What is needed is a global standard that works for everyone.