
How large language models address enterprise IT

It’s being pitched as Microsoft versus Google, but the large language models from these two giants are likely to revolutionise IT usability

Microsoft’s recent Copilot product enhancement in Office 365 shows how a new generation of artificial intelligence (AI) capabilities is being embedded into business processes. Similarly, Google has begun previewing application programming interfaces (APIs) that give access to its own generative AI via Google Cloud and Google Workspace.

The recent Firefly announcement of text-to-image generative AI from Adobe also demonstrates how the industry is moving beyond the gimmicky demonstrations used to showcase these systems towards technology that has the potential to solve business problems.

Microsoft 365 Copilot combines large language models with business data and the Microsoft 365 apps to give the Microsoft Office productivity suite an AI-based assistant that helps users work more effectively. For instance, in Word, it writes, edits and summarises documents; in PowerPoint, it supports the creative process by turning ideas into a presentation through natural language commands; and in Outlook, it helps people manage their inbox. Copilot in Teams sits behind online meetings, producing summaries of the conversation and presenting action points.
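The summarisation features in Word and Teams follow a now-familiar pattern: send the source text and an instruction to a large language model and use the reply. The sketch below illustrates that general pattern only (it is not Microsoft’s implementation) and assumes the openai Python package, a gpt-3.5-turbo model and an OPENAI_API_KEY environment variable.

```python
# A minimal sketch of the document-summarisation pattern behind Copilot-style
# assistants: send the document text to a large language model and ask for a
# summary. This is NOT Microsoft's implementation; it assumes the openai
# Python package (pip install openai) and an OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarise(document_text: str) -> str:
    """Ask the model for a short, plain-English summary of a document."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You summarise business documents in three bullet points."},
            {"role": "user", "content": document_text},
        ],
        temperature=0.2,  # keep the output factual rather than creative
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(summarise("Q3 revenue rose 12% on cloud growth; operating costs were flat."))
```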

Adobe has launched the initial version of its generative AI for image-making, trained using Adobe Stock images, open content and public domain content where copyright has expired. Rather than trying to make digital artists, designers and photographers redundant, Adobe has trained Firefly on human-generated images and focused it on producing image content and text effects that, according to Adobe, are safe for commercial use.

Google has put its large language model, Bard, into beta, and has introduced the PaLM API and MakerSuite, which give developers access to its generative AI models alongside Google Cloud and Google Workspace. Introducing the new development, Google CEO Sundar Pichai wrote in a blog post: “Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models. It draws on information from the web to provide fresh, high-quality responses. Bard can be an outlet for creativity, and a launchpad for curiosity.”
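For developers, access typically comes through a small API call. The sketch below is one hedged illustration of what calling the PaLM API from Python might look like; the google-generativeai package, the text-bison model name and the PALM_API_KEY variable are assumptions for illustration rather than details from Google’s announcement.

```python
# A minimal sketch of calling Google's PaLM API from Python, as a developer
# might once granted preview access. The google-generativeai package, the
# "models/text-bison-001" model name and the PALM_API_KEY variable are
# assumptions for illustration, not details from the article.
import os
import google.generativeai as palm

palm.configure(api_key=os.environ["PALM_API_KEY"])

completion = palm.generate_text(
    model="models/text-bison-001",   # a PaLM text model
    prompt="Draft a two-sentence status update for a delayed shipment.",
    temperature=0.3,                 # low temperature for a factual tone
    max_output_tokens=128,
)
print(completion.result)  # the generated text, or None if the call was blocked
```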

There are numerous online comparisons of the two rival large language models, and ChatGPT is regarded as the older technology, but Microsoft’s $10bn investment in OpenAI, the developer of ChatGPT, represents a very public commitment to it.

Speaking on the BBC’s Today programme, Michael Wooldridge, director of foundation AI research at the Alan Turing Institute, said: “Google has got technology which, roughly speaking, is just as good as OpenAI. The difference is that OpenAI and Microsoft have got a year’s head start in the market, and that’s a year’s head start in the AI space, where things move so ridiculously quickly.”


In a recent blog post discussing the speed with which OpenAI’s ChatGPT has developed into a model that seemingly understands human speech, Microsoft co-founder Bill Gates said: “Any new technology that’s so disruptive is bound to make people uneasy, and that’s certainly true with artificial intelligence. I understand why – it raises hard questions about the workforce, the legal system, privacy, bias and more. AIs also make factual mistakes and experience hallucinations.”

For Gates, AI like ChatGPT offers a way for businesses to automate many of the manual tasks office workers need to do as part of their day-to-day job.

“Although humans are still better than GPT at a lot of things, there are many jobs where these capabilities are not used much,” he said.

“For example, many of the tasks done by a person in sales (digital or phone), service, or document handling (like payables, accounting or insurance claim disputes) require decision-making, but not the ability to learn continuously. Corporations have training programs for these activities, and in most cases, they have a lot of examples of good and bad work. Humans are trained using these data sets, and soon, these data sets will also be used to train AIs that will empower people to do this work more efficiently.”
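One straightforward way such example sets can be used today is few-shot prompting, where past cases labelled as good or bad decisions are placed in the prompt so the model imitates them on a new case. The sketch below is a generic illustration of that idea rather than any vendor’s product; the claims and labels are invented.

```python
# A generic few-shot prompting sketch: labelled examples of past decisions
# (the kind of "good and bad work" data sets Gates describes) are embedded
# in the prompt so a language model can imitate them on a new case.
# The claims and labels below are invented for illustration.

LABELLED_EXAMPLES = [
    ("Invoice total does not match the purchase order; supplier notified.", "escalate"),
    ("Duplicate claim for the same repair already settled last month.", "reject"),
    ("Claim within policy limits, documents complete and consistent.", "approve"),
]

def build_prompt(new_case: str) -> str:
    """Assemble a few-shot prompt from labelled historical cases."""
    lines = ["Decide whether to approve, reject or escalate each claim.", ""]
    for text, label in LABELLED_EXAMPLES:
        lines.append(f"Claim: {text}")
        lines.append(f"Decision: {label}")
        lines.append("")
    lines.append(f"Claim: {new_case}")
    lines.append("Decision:")
    return "\n".join(lines)

if __name__ == "__main__":
    # The assembled prompt can be sent to any large language model API.
    print(build_prompt("Claim filed two days after the policy lapsed."))
```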

Uses for business

Discussing the use of generative AI and large language models in business, Rowan Curran, an analyst at Forrester, said: “The big development here is that these large language models essentially give us a way to interact with digital systems in a very flexible and dynamic way.” This flexibility, he said, has not been available to a large swathe of users in the past, and these models give users the ability to interact with data in a “more naturalistic way”.
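A common concrete form of that naturalistic interaction is asking questions of structured data in plain English. The sketch below shows one way this can work, with a model translating a question into SQL that is run against a local SQLite database; the openai package, the table schema and the model name are assumptions for illustration.

```python
# A hedged sketch of the "ask your data a question in plain English" pattern:
# a language model translates the question into SQL, which is then run against
# a local database. The table, column names and the use of the openai package
# are assumptions for illustration only.
import os
import sqlite3
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

SCHEMA = "CREATE TABLE orders (id INTEGER, region TEXT, amount REAL, ordered_on TEXT)"

def question_to_sql(question: str) -> str:
    """Ask the model to turn a plain-English question into a single SQL query."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"Translate questions into one SQLite query for this schema:\n{SCHEMA}\nReturn only SQL."},
            {"role": "user", "content": question},
        ],
        temperature=0,  # deterministic output suits query generation
    )
    return response["choices"][0]["message"]["content"].strip()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute(SCHEMA)
    conn.execute("INSERT INTO orders VALUES (1, 'EMEA', 1200.0, '2023-03-01')")
    sql = question_to_sql("What was the total order value in EMEA?")
    print(sql)
    print(conn.execute(sql).fetchall())  # a human should review generated SQL before running it
```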

Regulators are keen to understand the implications of this technology. The US Federal Trade Commission (FTC), for instance, recently posted an advisory regarding generative AI tools like ChatGPT. In the post, the regulator said: “Evidence already exists that fraudsters can use these tools to generate realistic but fake content quickly and cheaply, disseminating it to large groups or targeting certain communities or specific individuals. They can use chatbots to generate spear-phishing emails, fake websites, fake posts, fake profiles and fake consumer reviews, or to help create malware, ransomware, and prompt injection attacks. They can use deep fakes and voice clones to facilitate imposter scams, extortion and financial fraud. And that’s very much a non-exhaustive list.”

The FTC Act’s prohibition on deceptive or unfair conduct can apply if an organisation makes, sells or uses a tool that is designed to deceive, even if that is not its intended or sole purpose.

Curran said the technology used in these new AI systems is opaque to human understanding. “It’s not actually possible to look inside the model and find out why it’s stringing a sequence of words together in a particular way,” he said.

The models are also prone to stringing words together into phrases that, while syntactically correct, are factually wrong or nonsensical, a phenomenon often described as hallucination. Given these limitations, Curran said it will be necessary for human curators to check the results from these systems to minimise errors.
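In practice, that curation step can be as simple as routing model output through a review queue rather than publishing it directly. The sketch below is a minimal, hypothetical illustration of such a human-in-the-loop gate; none of the function or field names comes from the article.

```python
# A minimal human-in-the-loop sketch of the curation step Curran describes:
# model output is held for review and only released once a person approves it.
# The review_queue, Draft type and approve() flow are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Draft:
    prompt: str
    model_output: str
    approved: bool = False
    reviewer_notes: str = field(default="")

review_queue: list[Draft] = []

def submit_for_review(prompt: str, model_output: str) -> Draft:
    """Park model output in a queue instead of publishing it directly."""
    draft = Draft(prompt=prompt, model_output=model_output)
    review_queue.append(draft)
    return draft

def approve(draft: Draft, notes: str = "") -> str:
    """A human curator signs off the draft before it is used downstream."""
    draft.approved = True
    draft.reviewer_notes = notes
    return draft.model_output

if __name__ == "__main__":
    d = submit_for_review("Summarise Q3 results", "Revenue grew 12% year on year.")
    text = approve(d, notes="Checked figure against the finance report.")
    print(text)
```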
