LLM series - Confluent: how to expand (and widen) LLM brain power

We tend to think of Large Language Models (LLMs) in the context of generative AI as some form of database, or as some kind of extension to new-age information management.

But, as Richard Cudd points out in his role as senior director for customer success engineering at Confluent, LLMs are in fact machine learning (ML) algorithms trained on a colossal volume of data to recognise the structures and patterns of natural language.

Typically deployed to tackle tasks like translation, question/answer generation and summarisation of text, an LLM is capable of generating entirely new text based on a simple prompt.

Cudd writes in full below to detail how LLMs are used today and how software application development professionals should be thinking about their role in modern applications…

A general-purpose brain

You can think of LLMs as the general-purpose ‘brain’ that underlies general AI use cases, waiting to do your bidding. ChatGPT is the most obvious example; GPT is the underlying LLM in that platform.

These models are brilliant due to their breadth of knowledge. They’re capable of producing an incredible array of answers to a diverse set of questions. But their generality is often at the cost of superficiality; it’s the imitation of language and knowledge rather than a genuine, comprehensive understanding of a topic.

LLMs are stateless 

For example, LLMs are stateless – which is to say they have no built-in memory between requests, so they cannot recall the last thing you said unless it is sent to them again.
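
In practice, that means the application has to carry any conversation history itself and resend it with every call. The short sketch below illustrates the point in Python; call_llm is a hypothetical stand-in for whatever model API is actually in use, not a real client.

# Sketch: an LLM call is stateless, so the application must keep the
# conversation history and resend it on every turn.
# call_llm() is a hypothetical placeholder for a real model API.

def call_llm(messages: list[dict]) -> str:
    # A real system would call a hosted or local model here.
    # This stub just echoes the last user message so the sketch runs.
    return f"(model reply to: {messages[-1]['content']})"

history = []

def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = call_llm(history)          # the full history travels with every call
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("What did I just say?"))     # the model only 'remembers' what we resend
print(ask("And before that?"))

Statefulness, in other words, becomes the application's job rather than the model's.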

They’re also trained on terabytes of public data. This both denies them access to private enterprise data (like customer records) and runs the risk of exposing them to misinformation.

Improving that state of play requires changing the context of a prompt to factor in the specific data that an LLM would miss – which, in many cases, is proprietary data.

Company-specific data, product information, order history and so on, can all inform an LLM after an initial request has been made. If an organisation can both guarantee real-time access to data and establish a 360-degree view of that data, then that data can augment the output of an LLM as it’s generating a response.
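
As a rough illustration (not a description of any particular product), the sketch below looks up a customer's order history from an in-memory store and folds it into the prompt before the model is called; the records, the customer ID and the call_llm helper are all invented for the example.

# Sketch: enriching a prompt with proprietary data (here, a fake order history)
# before it reaches the model. call_llm() is again a hypothetical placeholder.

ORDERS = {  # stand-in for a real-time view of enterprise data
    "cust-42": ["2024-03-01: laptop stand", "2024-05-12: USB-C dock"],
}

def call_llm(prompt: str) -> str:
    return f"(model answer based on: {prompt[:60]}...)"

def answer_with_context(customer_id: str, question: str) -> str:
    history = "\n".join(ORDERS.get(customer_id, ["no orders on file"]))
    prompt = (
        "Use the customer's order history to answer.\n"
        f"Order history:\n{history}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)

print(answer_with_context("cust-42", "What did I buy most recently?"))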

Introducing vectors

Once that data has been made accessible, the right systems can turn that no-context prompt into a vector: a list of numbers that represents the underlying, semantic qualities of a piece of content.
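
The toy sketch below shows the shape of that idea: text goes in, a fixed-length list of numbers comes out. The character-frequency 'embedding' here is deliberately crude and purely illustrative; a production system would use a trained embedding model.

# Toy sketch of embedding: map text to a fixed-length vector of numbers.
# A real system would use a trained embedding model; this hand-rolled
# character-frequency version only illustrates the shape of the output.

def embed(text: str, dims: int = 8) -> list[float]:
    vec = [0.0] * dims
    for ch in text.lower():
        vec[ord(ch) % dims] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]     # unit-length vector

print(embed("Which customers renewed last quarter?"))
# -> a list of 8 numbers standing in for the prompt's semantics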

A vector search will identify certain qualities and then pull the data that matches those qualities – regardless of whether that’s text, metadata, audio files, customer data, or certain images.
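
Stripped of the infrastructure, a vector search is a nearest-neighbour lookup: score every stored vector against the query vector and keep the closest matches. The sketch below reuses the same toy embedding and a hand-rolled cosine similarity; a real deployment would lean on a vector database and proper embeddings.

# Sketch: vector search as a nearest-neighbour lookup by cosine similarity.
# Uses the same toy embed() as above; a real system would use a vector
# database and a trained embedding model.

def embed(text: str, dims: int = 8) -> list[float]:
    vec = [0.0] * dims
    for ch in text.lower():
        vec[ord(ch) % dims] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))   # both vectors are unit length

documents = [
    "Order #991: enterprise licence renewed in Q2",
    "Support ticket: login page timeout",
    "Invoice 1204: annual renewal, 50 seats",
]
index = [(doc, embed(doc)) for doc in documents]

def search(query: str, top_k: int = 2) -> list[str]:
    q = embed(query)
    scored = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in scored[:top_k]]

print(search("Which customers renewed last quarter?"))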

That data can then be used to reformat the prompt into one with the right context that specifically factors in this new information. With that beneficial context now provided and more specific knowledge to lean on, the LLM can provide a much more well-informed answer.
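
Pulling those pieces together, the final step is essentially prompt assembly: the retrieved records are pasted into the prompt ahead of the original question before the model is called. The names build_prompt and call_llm, and the hard-coded 'retrieved' records, are illustrative only and not any particular product's API.

# Sketch: rewriting the original prompt with the retrieved context in place.
# The retrieved records stand in for the output of the vector search above;
# call_llm() remains a hypothetical placeholder for a real model API.

def build_prompt(question: str, context_docs: list[str]) -> str:
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

def call_llm(prompt: str) -> str:
    return f"(model answer grounded in {prompt.count('- ')} retrieved records)"

retrieved = [
    "Order #991: enterprise licence renewed in Q2",
    "Invoice 1204: annual renewal, 50 seats",
]
question = "Which customers renewed last quarter?"
print(call_llm(build_prompt(question, retrieved)))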

Ask the right questions

Confluent’s Cudd: Knowing what LLMs don’t know can help developers build new layers of knowledge power.

The natural language output of such a system essentially makes that huge pool of data interactive.

Both structured and unstructured data can now be interrogated not by complex processes, but by simply asking the right questions.
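
For the structured side, one common pattern (offered here purely as an illustration, not something prescribed above) is to have the model draft a query from the plain-English question and then run that query against the data. In the sketch below, generate_sql is a hypothetical stand-in for that model call, hard-coded so the example runs on its own.

# Sketch: letting a plain-English question drive a query over structured data.
# generate_sql() is a hypothetical stand-in for an LLM asked to draft SQL;
# the table, data and question are invented for the example.

import sqlite3

def generate_sql(question: str) -> str:
    # In practice the LLM would translate the question into SQL.
    # Hard-coded here so the sketch runs without a model.
    return ("SELECT region, SUM(amount) AS total FROM sales "
            "GROUP BY region ORDER BY total DESC LIMIT 1")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 95.0)])

question = "Which region sold the most?"
rows = conn.execute(generate_sql(question)).fetchall()
print(rows)   # the rows (or a summary of them) would then be handed back to the LLM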

Unusually, this gives developers the opportunity to make a huge impact on stakeholders in businesses where tech literacy isn’t a given right across the board.

Consulting firms, healthcare and insurance companies are among the markets taking advantage of LLMs, allowing their stakeholders to develop a much greater understanding of the nuances of their business through everyday language. By asking simple questions, they can be much better informed about how their business works and apply that knowledge to future decisions.

Data democracy

It’s only through the developer that a tech stack can contextualise and refine the answers an LLM can give you.

The opportunity to be open, informed and democratic with access to that data is dependent upon their expertise.

Being able to understand so many different nuances of a business is just the start. From real-time payments to AI-driven healthcare, the terabytes of business data now accessible with a simple question can have a transformational effect. The more data you have, the more you know – and the more you know, the better your decisions.
