Zluri co-founder on ChatGPT: Defeating developer-data dilemmas

This is a guest post for the Computer Weekly Developer Network written by Sethu Meenakshisundaram in his capacity as co-founder of Zluri, a SaaS management platform company that works to help organisations eliminate shadow IT.

Meenakshisundaram writes in full as follows…

Artificial Intelligence (AI) has cemented itself as the decade’s hottest technology trend, and chatbots, known for their ease of use and general helpfulness, are one of the most widely used forms of AI.

They are used extensively by companies for customer service, virtual assistants and other applications. 

ChatGPT is one such chatbot and its popularity has grown exponentially since its release in November 2022; as many will already know, it uses a large language model to generate human-like conversations in response to prompts on a wide variety of subjects.

However, with the increasing use of ChatGPT, concerns about privacy and security are growing. Companies want to track ChatGPT usage to ensure that their data is secure and that employees are not misusing company resources.

So how does ChatGPT security differ from cybersecurity concerns to date?

Historically, IT departments have been able to localise cyber attacks quickly, secure the access point and limit a data breach’s reach. Security teams have visibility over vulnerable entry points and the ability to do damage control to prevent future issues.

New risk origins

But because ChatGPT is hosted in the public domain and isn’t controlled by the enterprise, companies risk not being able to identify the origin of a cyber attack and respond quickly. It is also unclear exactly how ChatGPT uses the information entered into its system to further train its models – leading enterprises to reconsider their ChatGPT policies and seek out ways to monitor which employees have access.

Why ChatGPT is a security concern for companies:

Confidential leaks – Companies must address the possibility of employees feeding company data into ChatGPT, where it may be used to train future models. Though it’s unlikely an employee would do so maliciously, this is a serious concern because it opens the door for confidential company data to inadvertently become public.

Developer code concerns

Already, quality assurance and engineering teams are using ChatGPT to generate unit and integration test cases for automation from internal code.

This creates two major issues: 

Firstly, proprietary information becomes public, which can have serious repercussions, including eliminating any product-based competitive advantage a business might have had if direct competitors get hold of confidential product information. Secondly, it compromises a firm’s go-to-market strategy if the public gains access to a product in beta that should only be visible to its intended target audience.
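To make the exposure concrete, here is a minimal sketch of the kind of call a developer might make, assuming the openai Python SDK and an illustrative model name (the file path is hypothetical). The proprietary source code embedded in the prompt leaves the organisation’s boundary the moment the request is sent.

from pathlib import Path
from openai import OpenAI

# Internal, proprietary code a tester wants unit tests for (hypothetical path)
source_code = Path("internal/payments/retry_logic.py").read_text()

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The full source file is embedded in the prompt and sent to an externally
# hosted service, outside the company's control.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a QA engineer writing pytest tests."},
        {"role": "user", "content": f"Write unit tests for this code:\n\n{source_code}"},
    ],
)
print(response.choices[0].message.content)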

Unauthorised access is another issue that companies are up against when integrating ChatGPT. Based on the way compliance policies are already written, employees using ChatGPT might not fall into an already-defined category, creating a grey area for IT.

For example: if an IT or business executive wants to automate their ticket resolution process using ChatGPT, they are required to share historical customer or internal resolution information with the system. The model now has access to sensitive information such as customer intent, root cause analyses (RCAs), tech infrastructure models and so on. This is an access violation that may fly under the IT team’s radar until it’s too late.
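As a hedged illustration, the sketch below shows the sort of prompt such an automation might assemble. The field names and values are invented, but every one of them (customer identity, intent, RCA, infrastructure details) would travel to the hosted model as plain text.

# Hypothetical records pulled from an internal helpdesk export;
# field names and values are illustrative only.
past_tickets = [
    {
        "customer": "Acme Corp",
        "intent": "Refund request after a failed checkout",
        "rca": "Connection-pool exhaustion on the payments database",
        "infrastructure": "payments-db (PostgreSQL 14), checkout-svc v2.3",
        "resolution": "Increased pool size and added a circuit breaker",
    },
]

new_ticket = "Customers report intermittent 500s on the checkout endpoint."

# Build the prompt the automation would send; this string is exactly what
# would be shared with the external model.
prompt = "Suggest a resolution for the new ticket based on past tickets.\n\n"
for ticket in past_tickets:
    prompt += f"Past ticket: {ticket}\n"
prompt += f"\nNew ticket: {new_ticket}"

print(prompt)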

There’s much to consider here, so let’s move on.

Education & technology

Enterprises concerned about their security but eager to explore the possibilities of using ChatGPT to drive innovation should take the following into consideration:

Education matters.

Part of embracing ChatGPT and other generative AI technologies is educating cross-organisational teams appropriately. Executive leadership should consider holding an all-hands town hall to familiarise teams with the vulnerabilities ChatGPT creates. But approach this positively and let employees know that the organisation is concurrently exploring the value of the technology for the long term. This can also be a great time to present updated internal policies.

Technology matters too. 

When considering bringing gen-AI into the fold, it is often necessary to use other technologies to protect against security threats. For example, a SaaS management tool can give IT teams visibility over which users have already signed up for ChatGPT using a company email address or on a company device.
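As a rough sketch of the idea (not a real product integration; the export file and column names below are hypothetical), discovery can be as simple as scanning a report of third-party app sign-ups for OpenAI domains against the company’s email domain.

import csv

# Hypothetical export of third-party app sign-ups, e.g. from an SSO or
# OAuth-grant report; the column names are illustrative, not a real schema.
with open("app_signups.csv", newline="") as f:
    chatgpt_users = {
        row["user_email"]
        for row in csv.DictReader(f)
        if "openai.com" in row["app_domain"].lower()
        and row["user_email"].endswith("@example.com")  # company email domain
    }

print(f"{len(chatgpt_users)} employees have signed up for ChatGPT:")
for email in sorted(chatgpt_users):
    print(" -", email)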

Widespread adoption of generative AI also increases the likelihood of phishing scams, as malicious emails sound more human-like and contain fewer errors. An AI detection email integration can warn employees that an email may be unsafe before they share sensitive information.
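Under the hood, such an integration might score each inbound message with an AI-text classifier and warn the recipient above a confidence threshold. The sketch below assumes the Hugging Face transformers library and uses a placeholder model name rather than any specific product.

from transformers import pipeline

# "acme/ai-text-detector" is a placeholder; a real deployment would select
# and evaluate a specific AI-generated-text classifier.
detector = pipeline("text-classification", model="acme/ai-text-detector")

email_body = (
    "Dear colleague, please review the attached invoice and confirm "
    "your account credentials at the link below."
)

result = detector(email_body)[0]  # e.g. {"label": "ai-generated", "score": 0.97}
if result["score"] > 0.9:
    print("Flagged: this message may be machine-generated. Verify before replying.")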

Perhaps an organisation has concluded that there is great potential for innovation and increased efficiency in gen-AI, but isn’t ready to risk sharing confidential data without knowing how it will be used. Organisations can instead build their own models, trained on internal data, to get the full functionality of ChatGPT without compromising information security.
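A minimal sketch of that direction, assuming the Hugging Face transformers library: run an open-weight model inside the organisation’s own infrastructure so that prompts containing confidential information never leave the environment. The small gpt2 model below is only a stand-in; in practice a larger open model would be fine-tuned on internal data.

from transformers import pipeline

# Everything here runs locally; no prompt or internal data is sent to an
# external API. "gpt2" is a stand-in for a larger, fine-tuned internal model.
generator = pipeline("text-generation", model="gpt2")

prompt = "Summarise the following internal incident report:\n<confidential text>"
result = generator(prompt, max_new_tokens=80)
print(result[0]["generated_text"])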

And so to the road ahead.

What’s next?

ChatGPT poses a significant threat to data privacy and security, and it is essential for companies to track and manage its usage.

But before swearing off generative AI, consider the alternatives. Through company-wide education and retraining, and with the right technology solutions on board, ChatGPT and the like can be used to streamline workflows and elevate innovation inside the organisation.
