
Could your employees’ use of ChatGPT put you in breach of GDPR?

Following Italy's run-in with OpenAI’s ChatGPT, legal expert Richard Forrest emphasises the need for additional scrutiny when using AI tools in a work environment, and offers practical guidance on doing so safely

Amid the recent news reports regarding ChatGPT, there are major concerns that a considerable proportion of the population lacks adequate knowledge of how generative AI tools such as ChatGPT operate. This could result in the unintentional disclosure of confidential information, and therefore a breach of GDPR.

The chatbot's potential to assist business growth and efficiency has led to a surge in users from multiple sectors. Nonetheless, concerns have been raised following instances where employees carelessly submitted confidential corporate information, along with sensitive patient and client data, into the chatbot.

As such, it is important for businesses to adopt effective measures to ensure that employees in all fields, such as healthcare and education, remain compliant.

The urgency for businesses to implement compliance measures comes after a recent investigation by Cyberhaven revealed that sensitive data makes up 11% of what employees copy and paste into ChatGPT. In one instance, the investigation detailed a medical practitioner who entered private patient details into the chatbot, the repercussions of which are still unknown.

This gives rise to grave concerns regarding GDPR compliance and confidentiality.

The primary concern centres on how large language models (LLMs), such as ChatGPT, use personal data for training, which could then be regurgitated at a later date. For instance, if a medical professional had entered confidential patient information, is there a possibility that ChatGPT could reproduce this data in response to another user’s query about that patient?

When it comes to use in a business context, there are similar worries. Businesses employing ChatGPT for administrative purposes may be jeopardising confidentiality agreements with clients, as employees might enter sensitive information into the chatbot. In the same vein, trade secrets, such as source code and business plans, may also be at risk of being compromised if entered into the chatbot, thus putting employees in potential violation of their contractual obligations.

My concerns became more apparent when news broke of the Italian data protection regulator’s ban on ChatGPT. Since then, the regulator has said it will lift the ban if OpenAI complies with new measures. According to a recent Yahoo! article, the authority has requested that the AI chatbot’s operator provide users with “the methods and logic” behind the data processing required for the tool to operate.

What’s more, users and non-users should be given the tools “to request the correction of personal data inaccurately generated by the service or its deletion, if a correction is not possible [and] oppose ‘in a simple and accessible manner’ the processing of their personal data to run its algorithms”.

However, I am unsure whether these measures are enough, as they do not seem to address concerns over the regurgitation of learned data. In light of this, all businesses that use ChatGPT must adopt effective measures to ensure employees are remaining GDPR compliant.

By making it clear what constitutes private and confidential data, and outlining the legal consequences of sharing such sensitive information, you should be able to drastically reduce the risk of this data being leaked.

To assist with data compliance and security training, consider the following actionable tips (a minimal screening sketch follows the list): 

  • Assume that anything you enter could later be accessible in the public domain;
  • Don’t input software code or internal data;
  • Revise confidentiality agreements to include the use of AI;
  • Create an explicit clause in employee contracts;
  • Hold sufficient company training on the use of AI;
  • Create a company policy and an employee user guide.
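
To make the first two tips concrete, below is a minimal, hypothetical sketch of a pre-submission screen that a business could run before any prompt reaches an external chatbot. The SENSITIVE_PATTERNS table, the screen_prompt function and the example prompt are illustrative assumptions rather than any real product or API, and a production policy would need far broader checks.

    import re

    # Illustrative patterns only: a real policy would cover far more
    # categories (names, addresses, patient identifiers, API keys, and so on).
    SENSITIVE_PATTERNS = {
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "UK phone number": re.compile(r"\b(?:\+44|0)(?:\s?\d){9,10}\b"),
        "National Insurance number": re.compile(
            r"\b[A-CEGHJ-PR-TW-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b", re.I
        ),
    }

    def screen_prompt(text: str) -> list[str]:
        """Return a warning for each pattern that looks like personal data."""
        return [
            f"possible {label} detected"
            for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)
        ]

    prompt = "Summarise this note for patient John Smith, tel. 020 7946 0958"
    warnings = screen_prompt(prompt)
    if warnings:
        # Block the submission and explain why, rather than silently
        # forwarding the text to the chatbot.
        print("Prompt blocked:", "; ".join(warnings))

A screen of this kind is no substitute for training, but it gives employees a second chance to catch personal data before it leaves the organisation.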

A pan-European issue

Italy's experience with ChatGPT highlights the need for companies across Europe to prioritise compliance measures.

Since LLMs are still in their early stages of development, many individuals may lack a clear understanding of how they function, which increases the risk of inadvertently submitting private information. What’s more, the interfaces of LLMs may not always comply with GDPR, leaving liability unclear if company or client data is compromised.

Businesses that use ChatGPT without proper training and caution may unknowingly expose themselves to GDPR data breaches, resulting in significant fines, reputational damage and legal action. As such, using it as a workplace tool without sufficient training and regulatory measures is ill-advised.

Currently, one of the biggest causes of data breaches in the UK across most sectors is human error. As AI is being utilised more frequently in the corporate sphere, it is important to make training a priority.

Businesses must now take action to ensure internal rules and policies are drawn up, and to educate employees on how AI chatbots ingest and retrieve data. It is also imperative that the UK engages in discussions for the development of a pro-innovation approach to AI regulation.

Organisations and businesses are obligated by law to safeguard personal information and prevent unauthorised access by third parties. If they fail to meet this obligation and a breach occurs, affected individuals are entitled to seek compensation.

Richard Forrest is legal director at Hayes Connor, a specialist law firm covering data exposure resulting from data protection negligence, and data breach claims relating to breaches of privacy, identity theft and financial loss.
