
The legal quagmire of creativity in artificial intelligence

AI holds the promise to solve problems beyond the scope of human imagination. But if an AI can create, who owns its work?

Artificial intelligence (AI) is becoming an established alternative to human capabilities in computation, data-driven optimisation and manual labour. However, the latest models are also capable of that most human of qualities – creativity.

In one recent example, Rutgers University and Facebook’s AI lab teamed up to develop a painting algorithm that creates new genres of art. It works with two neural nets bouncing ideas off each other: one creating, the other judging, in remarkable mimicry of the human creative process. Meanwhile, film directors are using classical music composed by an artificial intelligence virtual artist (AIVA), and DeepMind has announced it is developing an AI with imagination.
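To illustrate the “one creating, the other judging” set-up described above, here is a minimal sketch of an adversarial training loop of the kind such painting systems build on. The framework (PyTorch), the network sizes, the random stand-in training data and the hyperparameters are illustrative assumptions for the sketch, not details of the Rutgers/Facebook model.

```python
# Minimal sketch of a two-network "create and judge" loop (GAN-style).
# All sizes, data and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28

# The "creator": maps random noise to a candidate image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# The "judge": scores how plausible an image looks.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

# Stand-in for a batch of real artworks, scaled to the Tanh range [-1, 1].
real_images = torch.rand(32, image_dim) * 2 - 1

for step in range(100):
    # Judge: learn to tell real images from the creator's output.
    fakes = generator(torch.randn(32, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_images), torch.ones(32, 1)) + \
             loss_fn(discriminator(fakes), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Creator: learn to produce images the judge accepts as real.
    fakes = generator(torch.randn(32, latent_dim))
    g_loss = loss_fn(discriminator(fakes), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```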

As this new breed of creative robot continues to develop, there are significant legal issues to consider. Can an AI’s creation be protected by copyright and patents? Likewise, is an AI capable of infringing on someone else’s IP rights?

Can an AI’s work be legally protected?

This issue is multifaceted, as legal experts try to apply existing law to fast-evolving circumstances – something that does not always work. There are also differences in national legal systems, so technology companies need to take a global perspective to fully realise all the implications.

For example, French copyright law was created largely to protect individual authors above all, and is even resistant to ownership of copyright by legal persons (as opposed to individuals); the standard for protecting a work is that it bears the imprint of the author’s personality. Clearly, authors must be individuals, and an AI cannot hold copyright.

The UK reaches a similar result: the Copyright, Designs and Patents Act 1988 sets out that in the case of a computer-generated literary, dramatic, musical or artistic work, “the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken”.

Similarly, the US Copyright Office will not register works created by animals or through the forces of nature: to be entitled to registration, a work must be authored by a human. As an AI does not (yet, at least) have legal personhood, it, much like an animal, requires a legal person, whether human or organisation, to take responsibility for its actions. An AI therefore cannot create copyrighted material alone.

Much has been made of the monkey selfie case (Naruto v. Slater), for which there was no precedent. British nature photographer David Slater argued he had a copyright claim over selfies taken by an endangered Celebes crested macaque in Indonesia, as he had engineered the situation that resulted in the pictures.

The photographer lost the case at first instance over permission to use the images, as the court ruled that no copyright existed because the pictures had not been taken by a human. This is an example of the way copyright has to attach to a person: it was not that the monkey held the copyright, but rather that copyright could not attach to the images at all, because they were taken without any human intervention.

Another way to protect intellectual property is through patents – and here the law is even clearer. Patent law requires inventors to be individuals who contributed to the conception of an invention or its reduction to practice.


For example, if an AI created an entirely new semiconductor chip, it could not be protected by patents unless some human intervention took place in the creative process, such as by the person who programmed the AI. In the marketplace, it is now common practice for the owner, founder or head of R&D of the company that owns the AI to be named as the inventor of the product to deal with this eventuality.

Again, under current laws, an AI cannot own a patent. This raises the question of whether every AI work that is not co-created by a human falls into the public domain by default. That is not quite the case: some legal grounds can be used to protect an AI’s creations, especially confidentiality and the protection of trade secrets, but it will take some time for legislation to change and offer more substantial protection.

Can an AI infringe existing IP rights?

The days when every AI development needed human intervention to “coach” the machine through its learning are gone. Now that AI systems have been given the capability to modify their own code, there is a risk that the resulting code infringes someone else’s rights.

This brings into question who is liable for that infringement. At present, the owner, developer, programmer or manufacturer of the AI is likely to be held ultimately responsible for its actions.

Promisingly, some legal areas relating to AI are showing signs of movement, including liability. In February 2017, MEPs asked the European Commission to propose rules on AI and robotics to exploit their economic potential and guarantee standards of safety and security, including establishing liability for accidents. There have also been debates on whether to establish legal personhood for AI.

If robots were granted full legal personhood in the future, the situation would be quite different. If an AI with personhood status independently created a work of art that infringed another’s rights, the usual infringement rules would apply. Suing a robot may sound like far-fetched science fiction, but legal personality and the ability to own assets go hand in hand, so it is theoretically possible if legal personality is granted.

However, an additional legal change would be needed to open up this possibility. IP laws as they stand do not recognise an AI’s right to invent a new piece of technology that can be patented, or to create a work of art that can be copyrighted. The law still requires human intervention for creation to be said to have taken place. This matters because, so far, IP legislation is moving at a much slower pace than the debate on liability.

These issues will become more prevalent as technology moves from soft AI (non-sentient artificial intelligence focused on one narrow task) to hard AI (artificial general intelligence with consciousness, sentience and mind).

Rights for robots

We may reach a point where AI is as smart as a human and requests the same rights as people – as dramatised in the late Isaac Asimov’s novels. In future, a fourth law might be added to Asimov’s Laws of Robotics: “Robots have a legal personality and are responsible for their actions.” But for now, we have some way to go.

Where we cannot apply existing law to new situations, new laws need to be created. Identifying a legal status for AI, and sharing this status around the world, would provide an answer to this exciting challenge.
