LLM series - eSentire: Start secure, to avoid black clouds later

This is a guest post for the Computer Weekly Developer Network written by Jeff Schwartzentruber in his capacity as senior Machine Learning (ML) scientist at eSentire.

Schwartzentruber is part of the team that built eSentire’s own generative AI (gen-AI) service, which complements the organisation’s approach to system security and its specialism in multi-signal Managed Detection and Response (MDR) services.

Schwartzentruber writes as follows…

CEOs love generative AI – according to KPMG, 70 percent of companies are investing heavily in this technology to create new opportunities, and they expect a return on that investment within three to five years.

For developers, implementing new services that build on Large Language Models (LLMs) is a great opportunity to try new things and to develop their personal skills. However, at this early stage, we should also consider security.

Security & LLMs

Security around LLMs covers a range of areas, from data sharing and privacy to the accuracy of responses. For developers, considering these areas early can save a lot of time and effort later. Carrying out a threat modelling exercise can show where you might need to harden your systems, or where you have to consider your legal responsibilities around the data.

LLMs work based on prompts entered by users. Those prompts can potentially include company data or Personally Identifiable Information (PII) on customers, which would then be sent to the LLM for processing. What happens to that data after it is entered, and is it managed well enough to keep it secure? As an example, Samsung engineers used confidential data in ChatGPT sessions – that data was then returned in other sessions, leaking confidential information.
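One practical step is to scrub obvious PII from prompts before they leave your environment. The sketch below is a minimal, assumed approach using simple regular expressions; the patterns and the scrub_prompt() helper are illustrative rather than a complete PII detector, which would normally rely on dedicated tooling.

```python
# Minimal sketch of redacting obvious PII before a prompt leaves your network.
# The regexes and scrub_prompt() are illustrative assumptions, not a complete
# PII detector.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b(?:\+?\d{1,3}[ -]?)?\(?\d{3}\)?[ -]?\d{3}[ -]?\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace matched PII with placeholders and report what was found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Summarise this complaint from jane.doe@example.com, phone 555-123-4567."
    clean, found = scrub_prompt(raw)
    print(clean)   # placeholders instead of the original values
    print(found)   # ['email', 'phone'] -- useful for audit logging
```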


While this hole has been filled, there are other ways to force an LLM to break its rules or provide additional data. Developers should conduct due diligence on any tools, so they understand the supply chain for the data they use.

For example, using the standard ChatGPT service by OpenAI allows OpenAI to re-use that data, but data sent via ChatGPT Enterprise or the API cannot be used to train OpenAI models. Similarly, if you use the Azure OpenAI service, then data is not shared onwards.

If you decide to implement your own LLM using an open source option, you will have more control over your model and how data is shared. At the same time, you still need standard security measures, such as encrypting traffic and ensuring that you have role-based access controls in place.
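As a rough illustration, a self-hosted open source model can be wrapped behind a simple role-based check before any text is generated. The model name and the role-to-task mapping below are assumptions made for the sake of the sketch, and transport encryption (TLS) would sit in front of the service itself.

```python
# A minimal sketch of gating a self-hosted model behind role-based access
# control. The ROLE_PERMISSIONS mapping and the model name are illustrative
# assumptions; any locally hosted instruction-tuned model would do.
from transformers import pipeline  # pip install transformers

ROLE_PERMISSIONS = {
    "analyst": {"summarise", "classify"},
    "admin": {"summarise", "classify", "generate"},
}

# Load a small local model once at start-up.
generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

def run_task(role: str, task: str, prompt: str) -> str:
    """Run a generation task only if the caller's role permits it."""
    if task not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role '{role}' is not allowed to run '{task}'")
    result = generator(prompt, max_new_tokens=128)
    return result[0]["generated_text"]
```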

Security, from the start

One approach that can help get security right from the start is to implement an API proxy for your LLM. This provides you with a control point that can validate requests and log them in real time. It also works the other way, as you can track the responses returned and their accuracy.

With this in place, you have a central point of control over data sent to LLMs: you can track any data that is shared and see the responses that come back. This can also pick up any use of PII by your staff and so inform your approach to governance, risk and compliance (GRC) training over time.
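As a minimal sketch of such a proxy, the example below assumes FastAPI and httpx: it validates incoming requests, logs who sent what and tracks the responses returned. The upstream URL, header names and logging format are illustrative assumptions, and a redaction step like the earlier one could be added at the same control point.

```python
# A minimal sketch of an API proxy in front of an LLM, assuming FastAPI and
# httpx. A real deployment would add authentication, rate limiting and
# persistent audit storage.
import logging
import os

import httpx
from fastapi import FastAPI, HTTPException, Request

UPSTREAM_URL = os.environ.get("LLM_UPSTREAM_URL", "https://api.openai.com/v1/chat/completions")
MAX_PROMPT_CHARS = 8000  # illustrative request-validation rule

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-proxy")
app = FastAPI()

@app.post("/v1/chat/completions")
async def proxy_chat(request: Request):
    body = await request.json()

    # Request validation: reject oversized or malformed prompts before they leave the network.
    messages = body.get("messages", [])
    prompt_chars = sum(len(m.get("content", "")) for m in messages)
    if not messages or prompt_chars > MAX_PROMPT_CHARS:
        raise HTTPException(status_code=400, detail="missing or oversized prompt")

    # Central audit point: log who sent what (the user id header is an assumed convention).
    user = request.headers.get("x-user-id", "unknown")
    log.info("user=%s prompt_chars=%d", user, prompt_chars)

    async with httpx.AsyncClient(timeout=60) as client:
        upstream = await client.post(
            UPSTREAM_URL,
            json=body,
            headers={"Authorization": f"Bearer {os.environ.get('LLM_API_KEY', '')}"},
        )

    # Track responses as well, so accuracy and data leakage can be reviewed later.
    log.info("user=%s status=%d response_chars=%d", user, upstream.status_code, len(upstream.text))
    return upstream.json()
```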

As generative AI evolves incredibly rapidly, implementing security may be the last thing on your mind. However, getting it in place from the start will make things easier in future. A security-by-design mindset will help you manage your deployment and ensure that you don't have to bolt on security later, when it is harder and more expensive. We have to manage and track how users interact with LLMs, so that we can see their impact and where we can improve results too.
