
What’s the smart way of moving forward with artificial intelligence?

Don’t try to use AI to solve everything at once – be clear about what specific use cases you want to employ it for

Artificial intelligence (AI), and in particular generative AI, has the potential to be transformative. Following on from the mass cloud adoption of the past fifteen years, it’s the next evolution in how we use technology. How can organisations operationalise it to deliver business benefits?

It’s a question that’s very much on the minds of technology leaders. The 2023 Nash Squared Digital Leadership Report, which takes in the views of over 2,100 technology leaders around the world, finds that seven in 10 tech leaders believe the benefits of AI outweigh the risks – but only 15% of them feel prepared for the demands of generative AI.

Only two in 10 have an AI policy in place and more than a third (36%) have no plans to even attempt one at this time. As our report reflects, there is “excitement, confusion and concern in apparently equal measure.” For probably the first time in my career, people are genuinely having conversations around, “Just because we can, should we?” AI is raising a whole new set of questions and debates.

The report also observes that, while large-scale AI implementations have been limited to date (to only 10% of organisations), we are reaching a tipping point now due to the growing popularity of generative AI.

It’s something I’m seeing everywhere: almost every company is asking itself what adopting AI, and generative AI in particular, could look like for it. What are the likely productivity benefits, what are the risks, and what are the costs?

Cloud parallels

I have mentioned cloud already – and in many ways, the point we have reached is reminiscent of the early days of cloud and software as a service (SaaS). Then, many CIOs and digital leaders were nervous about the move, fearing the rise of "shadow IT" and a loss of control within the organisation. But the most successful leaders realised that this was something that couldn’t be held back – they needed to embrace it and manage it by leading the evolution rather than attempting to micro-manage it.

The same applies here. In fact, it applies even more. Because whereas there was a degree of optionality over cloud and SaaS – it was essentially up to the CIO whether and when the organisation moved to it – with generative AI there isn’t that same element of choice: staff in the business will start using it (and already are). Realistically, there is nothing the CIO can do about that.

Education and guidance for staff

That’s why supportive guidance and policies for staff are essential because there are some obvious risks to generative AI. At a basic level, these include:

  • Copying and pasting commercially sensitive information
  • Using AI for research but not being aware the results may be flawed
  • Using AI to create content, but not being aware of the potential copyright issues
  • Overestimating the usefulness of AI – at the moment, most people can detect whether an article or other content has been written mostly by AI

Data privacy and confidentiality are a particular concern – the second highest in our Digital Leadership Report, cited by 36% of tech leaders and ranking only behind the need for effective regulation (42%). While hesitancy about creating AI policies is understandable in such a new field, it needs to be overcome as soon as possible: it is better to have an imperfect policy that you commit to updating than no policy at all. Basic protocols need to be clear and understood, and staff need to be supported to make good decisions. Alongside this, businesses should bolster their staff’s AI literacy through training and awareness sessions, discussion forums and online learning resources.

Building the approach

Our research finds that nearly half of organisations have some form of AI implementation or pilot in play. When it comes to generative AI, that figure is around a third. My advice for making this a success – both for them and for businesses that have not yet started – is to remember a few key principles.

First, remember that AI does not fall under the sole ownership of the IT function – so create a multi-disciplinary team to look at it involving other key stakeholders such as HR, finance, legal and marketing. Consider also giving overall responsibility for AI to one person in the business leadership team as part of their role. This will provide more clarity around accountability. In many businesses, responsibility for AI is quite amorphous at present. Having an AI leader will also help to take it out of boardroom or executive committee theoretical discussion and move it into a more practical, action-oriented domain.

Don’t try to use AI to solve everything at once – be clear about what specific use cases you want to employ it for. This could be any of a myriad of things including:

  • To automate a specific process to make it faster and more efficient
  • To help staff produce content more easily, such as reports, articles, reviews, presentations, meeting summaries or document templates
  • To research different models and designs of certain products and services
  • To find specific facts or pieces of information from a very large data field
  • To improve the external customer experience such as through help functions or predictive capabilities, anticipating their needs

Identify the areas that have the highest potential to add value and focus on those. We did just this in our business by assembling a multi-skilled team to create an intelligent chatbot called “BonBon”. Using OpenAI’s technology, this chatbot is now allowing our clients to automate tasks with a human-like interaction, such as onboarding new employees or responding to customer queries.
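
For illustration only, the sketch below shows the general shape of a chatbot of this kind – a system prompt that constrains the assistant to a support role, plus the user’s question, sent to OpenAI’s chat completions API from Python. The model name, prompts and handling are assumptions made for the example; this is not a description of how BonBon itself is built.

    # Minimal sketch of a customer-query assistant using OpenAI's chat completions API.
    # Model name, prompts and the lack of error handling are illustrative assumptions.
    import os
    from openai import OpenAI  # pip install openai

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    def answer_customer_query(question: str) -> str:
        # The system message sets the assistant's role; the user's question
        # is passed as the single conversation turn.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model choice for this example
            messages=[
                {"role": "system", "content": "You are a helpful customer support assistant."},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(answer_customer_query("How do I reset my account password?"))

In practice, the value comes less from the API call itself than from wiring it into the surrounding workflow – connecting it to internal knowledge, putting guardrails around sensitive data and measuring whether it actually saves people time.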

It is also highly advisable to consider working with an external, independent technology consultancy that can give you objective advice and guidance. In such a new area, this is a time for consultancies to step up.

Be clear on your ambition

Finally, be clear on what your corporate ambition is. Do you want to be an early adopter, leading the way and creating a competitive advantage? Or a fast follower, with lower risk and potentially lower cost? Or are you content to move much more slowly – what some would describe as a "laggard" – minimising risk and waiting until the technology and the use cases are more widely available and their robustness is proven?

For some, early adoption makes powerful sense – such as businesses with large numbers of people using technology to do the same things, like a service centre or customer service operation. For these businesses, cost savings may be the primary driver. Fast followers are likely to be businesses that see the opportunity to drive value creation by harnessing AI to free up people to focus on more value-adding tasks.

Wherever you sit on the spectrum, AI is going to have a massive effect. There is some hype of course, but this will right-size itself over time. We normally tend to overestimate the impact of new technology in the short term and underestimate it in the medium to long term. The goal must be to harness AI, under the decision-making control of human beings, for genuine improvements in efficiency, performance and outcomes – we want it to be omnipresent but not omnipotent. That’s the balance we need to collectively strive for.

George Lynch is head of technology advisory at NashTech, part of Nash Squared
