This is a guest blogpost by Triveni Gandhi, Responsible AI Lead, Dataiku
Large Language Models (LLMs), such as those behind ChatGPT, have seemingly moved from hype, to hot seat, to hazard. Amid the media flurry, it can be difficult to determine how to capitalise on the potential business opportunities Generative AI can offer while using it at scale in a safe and structured way.
The ongoing warnings surrounding the threats of AI have understandably raised concerns. When Geoffrey Hinton, often called the 'Godfather of AI', recently stepped down from Google to warn the world about the dangers of the technology he helped to develop, many other businesses raised red flags. IBM's Chief Privacy and Trust Officer Christina Montgomery testified before the United States Congress in May, stating that IBM favours further regulation of AI through existing regulatory bodies because AI can be harmful and requires safety interventions. Countless other high-profile, well-informed individuals across technology and business have expressed similar concerns.
The reality is that the companies that will be most successful in using LLMs to set themselves apart from the competition are not those simply jumping on the bandwagon. This isn't about being first; it's about taking the right approach for long-term benefit.
The companies that will prosper with LLMs are the ones adopting them in a governed, risk-managed way. They are considering data and privacy concerns. They understand that, without an approach grounded in reliability, accountability, fairness, and transparency, they will not only fail to realise the true value LLMs can deliver, they may also open themselves up to substantial risk.
So, how can companies use LLMs in a safe and governed way? It starts with a framework for the responsible development of AI, which can be applied to all uses of AI — not just Generative AI. However, there are LLM-specific steps that can be implemented alongside any Responsible AI framework. Here are a few tips on where to start and what to keep in mind.
A basic framework for Responsible AI
Responsible AI frameworks have been published by groups such as the OECD and the EU, covering a range of topics that need to be addressed in all AI systems. An accessible entry point to these comprehensive standards is a simplified framework focused on Reliability, Accountability, Fairness, and Transparency, or RAFT (not to be confused with the consensus protocol). Each of the four values in RAFT provides high-level guidance on the key considerations for the safe deployment of AI systems.
- Reliability – AI development happens with consistency and reliability in mind across the entire lifecycle. Data and models are secure and privacy-enhancing.
- Accountability – There is documented ownership of each aspect of the AI lifecycle, and people use this documentation to support oversight and control mechanisms.
- Fairness – The people building AI systems work to minimise bias against individuals or groups, and to support human determination and choice.
- Transparency – End users are aware when organisations are using AI. On top of that, the company provides explanations for the methods, parameters, and data used.
This framework can be used as a starting point for building safe and responsible AI in conjunction with the following steps.
Determine the exact use case in which the LLM should be deployed
There’s no getting around this: when it comes to using LLMs, use cases must be very well defined. This must include defining the company’s absolute 'no-gos', the purposes an LLM must never be used for. Beyond this, who should have access to the LLM? What will the integration between LLMs and day-to-day processes look like?
This may sound like an obvious first step, but along with determining and documenting the exact use case, companies should also establish their risk threshold, including where in the business cycle the model will operate.
Re-train open models on curated datasets specific to the use case and in a secure environment
When dealing with company-specific documents or unique industry-related content (such as medical journal publications), retraining models on curated datasets helps ensure the model returns results that are most useful and relevant to the end user. Additionally, retraining the model in a secure environment means that sensitive or copyrighted data is never sent to an external model provider, protecting proprietary data.
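As a rough illustration of the curation step, the sketch below filters an in-house corpus down to vetted sources and redacts obvious personal data before any retraining happens. The field names (`source`, `text`), source labels, and redaction pattern are assumptions for the example, not a standard schema; a real pipeline would apply the organisation's own data classification rules.

```python
import re

# Assumed document layout: each record has a "source" label and raw "text".
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def curate(documents, allowed_sources):
    """Keep only documents from vetted sources and redact email addresses."""
    curated = []
    for doc in documents:
        if doc["source"] not in allowed_sources:
            continue  # drop anything outside the approved source list
        curated.append({
            "source": doc["source"],
            "text": EMAIL_RE.sub("[REDACTED]", doc["text"]),
        })
    return curated

docs = [
    {"source": "medical_journal", "text": "Contact dr.smith@example.com for the trial data."},
    {"source": "scraped_web", "text": "Random forum post."},
]
curated_docs = curate(docs, allowed_sources={"medical_journal"})
print(curated_docs)
```

Only the curated, redacted records would then be handed to the fine-tuning job running inside the secure environment.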
Be transparent about when an LLM or AI system is providing information
Clear and direct language should indicate to end users that they are either interacting with an AI system (in the case of chatbots) or that the information they are receiving was produced by a model. This transparency is a bare minimum in the new age of Generative AI, and provides a baseline of trust for consumers. Additionally, transparency about how the model was trained (such as the type of data used in the training process) increases the trust in the output provided by a model. For more technical users, the full details of data and training pipelines could be made available.
Provide a feedback mechanism for end users to report harmful or incorrect information
Providing a feedback mechanism is essential. Users need to be able to report answers that are incorrect, toxic, or otherwise unhelpful. Collecting this feedback not only provides more agency to end users, it also helps developers further refine and augment models to be more performant and useful.
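To make this concrete, here is a minimal sketch of what a feedback record and store might look like. The category names and field layout are assumptions for illustration; a production system would persist the reports and route them to reviewers rather than hold them in memory.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeedbackReport:
    response_id: str   # which model output is being flagged
    category: str      # e.g. "incorrect", "toxic", "unhelpful" (assumed labels)
    comment: str = ""
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class FeedbackStore:
    """In-memory store; a real system would persist and route to reviewers."""
    VALID_CATEGORIES = {"incorrect", "toxic", "unhelpful"}

    def __init__(self):
        self.reports = []

    def submit(self, report):
        if report.category not in self.VALID_CATEGORIES:
            raise ValueError(f"unknown category: {report.category}")
        self.reports.append(report)

    def count_by_category(self):
        counts = {}
        for r in self.reports:
            counts[r.category] = counts.get(r.category, 0) + 1
        return counts

store = FeedbackStore()
store.submit(FeedbackReport("resp-42", "incorrect", "Wrong dosage cited"))
store.submit(FeedbackReport("resp-43", "toxic"))
print(store.count_by_category())
```

Aggregating reports by category like this gives developers a simple signal for which failure modes to prioritise when refining the model.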
The considered path to complex AI systems
Though there is no shortage of ready-to-use LLMs and other Generative AI tools, to get the most out of this technology, companies need to take a thoughtful and structured approach to the latest developments. The suggestions listed here can help organisations get started on their path to more responsible and effective development of complex AI systems. While using a responsible framework for AI development may feel like a slowdown in the ever-changing landscape, the real winners will be the companies that take a responsible approach to integrating LLMs into the enterprise.