Accountability for AI model misuse

The public and political backlash over the use of Grok’s AI engine to create explicit images, which forced the company to change tack, shows just how out of touch the tech giants really are.

In January, following research showing that the Grok AI engine was being used to “undress” people, Emma Pickering, head of technology-facilitated abuse and economic empowerment at Refuge, a charity providing specialist support for women and children experiencing domestic violence, called for tech companies to be held accountable for implementing effective safeguards and preventing perpetrators from causing harm. “Legislation to criminalise creating, or requesting the creation of, non-consensual deepfake intimate images has progressed through Parliament, but we are still waiting for the law to come into effect,” she said.

While the sharing of real and synthetic intimate images without consent is illegal in the UK, she pointed out that, in practice, the law is not being effectively enforced, with woefully low conviction rates.

This is something the Joint Committee on Human Rights has been looking at. The Information Commissioner’s Office (ICO), Ofcom and the Equality and Human Rights Commission (EHRC) provided evidence to the committee on 4 February.

During the session, William Malcolm, executive director of regulatory risk and innovation at the ICO, told committee members that there was a need to improve transparency.

As an example, Malcolm said that the warnings presented when someone uses a chatbot vary widely. He also told the committee that transparency around how AI models are constructed is inconsistent. “I think the standard is right, but there’s more for organisations to do,” he said.

Data protection laws in the UK are technology-neutral. Committee members were told that these laws have proven adaptable to emerging technologies such as AI. But as the ICO representative pointed out, while data protection laws impose strong accountability requirements to ensure organisations balance individual rights with their operations, there is a risk that new technologies such as AI expose gaps and tensions in applying data protection principles.

One example is the right of UK citizens to have their personal data deleted from an AI model, which could potentially reduce the deepfake abuse that sparked such a backlash earlier this year. But our data provides incredible value in reducing bias and inaccuracy. And at the time of writing, it seems the balance is weighted heavily in favour of the tech giants over individuals’ rights.

One wonders what’s going on in the minds of the people building and promoting these AI models. Some may be focused purely on the technological splendour of their inventions; others may view them through the lens of financial reward. But are they truly giving enough thought to how AI can and will be abused?