Businesses are excited about artificial intelligence (AI) and the benefits it offers, but like so many new technologies, that potential can be wasted if you don’t know where to begin the journey to production-ready AI.
Having identified a business challenge that AI is able to solve, CIOs must consider the stages towards successful AI – which is, after all, only a technology and not an outcome in itself.
“The key is understanding what problem you’re trying to solve,” says Chris Feltham, industry technical specialist for cloud at Intel.
The early stages of any AI journey to production are all about the data to be used for AI – considering factors such as sourcing, ingestion, cleansing and pre-processing.
Based on analysis of work with its customers, Intel has broken down the necessary stages of an AI project, and can show how much time should be spent on each stage, from the beginning of the journey to the end, where AI is in production and being used successfully.
“Organisations need to know what is most important to get right. Otherwise they risk being stuck in proof-of-concept paralysis,” says Feltham.
CIOs are often surprised that the computationally intensive training part of the process is less time-consuming than the preparatory stages around data, which are more labour-intensive, or the inference stages where the AI is actively being used to make decisions.
Feltham cites the example of an oil company that wants to use submersibles with AI capability to spot visual signs of structural failure in rigs, and what that means in terms of data collection.
There is a huge range of considerations in the initial stages. These include identifying what failure looks like, from rust to cracks; how to train the AI to recognise damage; where to source the training data and whether a library of images of damage exists; and, if there is no library, how to obtain the images.
The next step would be to determine whether the images are of sufficient quality, and whether they are properly labelled.
“It is like trying to teach a child to read,” says Feltham. “You have to show AI, ‘this is rust, this is metal fatigue.’ It is very labour-intensive involving labelling by experts from the field in which you are trying to teach the AI to be an expert. However, it is vital work that must take place to make sure the data is of sufficient quality and in the right place before training. If you don’t do it right, you will fail.”
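The labelling work Feltham describes can be pictured as building a mapping from raw images to expert-assigned classes. The sketch below is purely illustrative – the file names and the rust/metal-fatigue label taxonomy are hypothetical, not from Intel – but it shows the kind of basic data-quality check (class balance) that matters before training begins.

```python
# Hypothetical example: expert-assigned labels for rig-inspection images.
# File names and the label taxonomy are illustrative only.
labels = {
    "rig_cam01_0001.jpg": "rust",
    "rig_cam01_0002.jpg": "metal_fatigue",
    "rig_cam01_0003.jpg": "no_damage",
}

def label_counts(labels):
    """Tally examples per class -- a simple pre-training data-quality
    check, since heavily imbalanced classes train poorly."""
    counts = {}
    for cls in labels.values():
        counts[cls] = counts.get(cls, 0) + 1
    return counts

counts = label_counts(labels)
```

In a real project this mapping would hold thousands of images per class, each reviewed by a domain expert, which is exactly why Feltham calls the stage labour-intensive.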
Experience with customer journeys
Ensuring the availability, quality, consistency and labelling of data is an essential part of an AI project. Intel’s experience with customer journeys shows how important it is to know what data you have and where it is coming from.
Feltham gives a further example of questions around AI-led predictive maintenance for machinery: “Are machines generating the data to indicate potential failure – for example vibration data? If not, do you need to put sensors on the machine? Is the data operated on at a factory level or sent to a different datacentre? Where are you getting the data from to train AI what potential failure looks like?”
These labour-intensive scenarios highlight the time that needs to be spent. In contrast, the compute-intensive model training part of the journey, with its emphasis on data analytics and modelling, is often less onerous. “Only 30% of the development cycle is computationally intense,” says Feltham.
He points out that if you have trained AI to a sufficient level of predictability, managing exceptions becomes easier too - for example, in a supermarket where AI is used for shelf stocking.
“AI can spot whether a bottle of wine is not on the right shelf. You can train AI on what you stock but as a retailer you might have to retrain the model to respond to seasonal trends. For example, at Christmas, you’d have to retrain the model to spot Christmas puddings. Once the model understands the labelling and the data and can operate to the level of accuracy you need, you don’t need to over-emphasise this part of the process,” he adds.
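The seasonal retraining Feltham describes amounts to extending the model’s known label set and retraining whenever genuinely new classes appear. The sketch below is a conceptual illustration, not a real retail system; the product names are hypothetical.

```python
# Conceptual sketch: extending a shelf-stocking classifier's label set
# for a seasonal range. Product names are hypothetical.
core_range = {"red_wine", "white_wine", "mineral_water"}

def add_seasonal_range(known_labels, seasonal_labels):
    """Return the updated label set and whether retraining is required.
    Retraining is needed whenever labels appear that the model has
    never been trained on."""
    new_labels = seasonal_labels - known_labels
    return known_labels | seasonal_labels, bool(new_labels)

labels, needs_retraining = add_seasonal_range(core_range, {"christmas_pudding"})
```

The point of the check is the one Feltham makes: once the model already covers the labels it will meet, no further retraining effort is needed, so this part of the process should not be over-emphasised.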
Deployment of technology comes towards the end of the journey - this is where Intel technologies can help in successful execution. Intel can provide different tools to deploy to a variety of hardware.
“AI is new to many companies, and it is very tempting to think you have to deploy a load of specialist equipment, but that is not always the case,” says Feltham.
“Many customers think you need accelerators, but you don’t necessarily need them. It might help you do the training quicker, but as discussed it’s quicker for just 30% of your development cycle. Since you can also do training on the infrastructure you already have, you need to decide if the additional investment in specialist equipment is worth it – particularly in the early stages of adoption.”
Intel’s approach helps customers understand that they can deploy AI onto assets already in the field or the datacentre, without the need for exotic, specialist equipment. Its commitment to continuous AI innovation helps customers avoid unnecessary spending on new hardware.
“With successive generations of Xeon processors, Intel will build in increasing amounts of AI acceleration to ensure new equipment is optimised for AI alongside general purpose workloads,” says Feltham, citing by way of example that Intel has increased image inference performance on Xeon Scalable Processors by 30 times since their launch in July 2017.
Intel software is open source and the company offers free tools and optimisation. For example, BigDL is a deep learning software library that is provided as a free download. It can be added to Apache Spark to enable deep learning applications and new capabilities on top of existing deployments. Customers choosing this approach include JD.com, World Bank, China UnionPay and Midea.
Intel provides references and best-known configurations for BigDL to simplify roll-out.
Feltham highlights how customers can benefit from being able to “write once for any architecture” with Intel tools such as Intel OpenVINO and oneAPI, a unified programming model across diverse computing architectures.
“Intel tools enable you to deploy anywhere. You don’t have to ask people to develop six different ways for six different hardware platforms. You can develop an AI model once, use the tools and deploy anywhere,” he says.
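The “develop once, deploy anywhere” idea can be sketched as a single model definition passed through a per-device compilation step, rather than six codebases for six platforms. The code below is not the OpenVINO or oneAPI API – it is a plain-Python illustration of the pattern, with an assumed device list.

```python
# Conceptual sketch of "write once, deploy anywhere". Not the OpenVINO
# API -- just the pattern: one model definition, compiled per target
# device instead of rewritten per platform.

def build_model():
    """One model definition, written once (a trivial stand-in here)."""
    return {"op": "threshold", "value": 0.5}

def compile_for(model, device):
    """Pretend per-device compilation: same model, different target.
    The supported-device list is an assumption for illustration."""
    supported = {"cpu", "gpu", "vpu"}
    if device not in supported:
        raise ValueError(f"unsupported device: {device}")
    return {"model": model, "device": device}

model = build_model()
deployments = [compile_for(model, d) for d in ("cpu", "gpu", "vpu")]
```

The design point mirrors Feltham’s: the model is authored exactly once, and only the final compilation target changes per deployment.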
By following the Intel guidelines on the journey to production AI, organisations can be geared up for success.