
High data quality key to reducing supply chain disruption

What could be the role of data quality enhancement in reducing the supply chain disruption that has beset the UK economy because of Brexit, Covid, and war in Europe?

Those in the supply chain discipline have always known how much is at stake in logistics management, but the problem has really entered the public consciousness over the last few years – especially in the UK, where uncertainty has gone from bad to worse under the triple whammy of Brexit, the pandemic, and political and economic crises. In 2021, a Gartner survey found that 76% of supply chain executives believed they faced more supply chain disruptions than three years earlier.

Big data has sprung up as a possible solution to many of the problems the supply chain faces, and a way to make the process more efficient. However, although the potential is there, the answer isn’t necessarily as simple as “using big data” – companies using software also need to be conscious of how big data works and how to use it properly.

It is fair to say the UK supply chain has faced challenge after challenge in recent years. The first came with the 2016 decision to leave the European Union, which had the government scrambling to find a new way to trade with the European market – and companies scrambling to pick up the pieces.

Then came the Covid-19 pandemic, which caused all kinds of material shortages – for instance, the construction industry saw its stock levels change by up to 5% each quarter. Semiconductors and motors were also affected. These problems were compounded further by geopolitical issues, which resulted in energy shortages and uncertainty.

“Supply chains have risen to a top three position on board agendas,” says Iain Prince, a partner and UK supply chain lead at KPMG. 

This comes on top of the usual challenges of the UK landscape. “UK companies rely more heavily than those in other countries on items produced abroad,” says Doug Laney, an innovation fellow in data and analytics strategy at West Monroe. “This means supply chain visibility and predictability are ever more critical.”

The role of big data

With so many obstacles to overcome, the supply chain needs a saviour – and many experts are pointing to big data to fill the role.

Prince believes data will become more important in this new era. He says that after Brexit, “there is uniquely new importance placed on master data, given the customs and other regulatory impacts of moving goods between the two markets”. Also, the greater risks posed in global trade and the need to be resilient mean that the predictive capabilities of data could be crucial.

So, what exactly are the problems that big data can solve?

According to a special report from Thomson Reuters, the biggest factors are traceability (knowing where goods are), predicting potential problems, having plans in place to address those problems, and maintaining customer service.

Big data can also tackle tasks such as:

  • Improving efficiency.
  • Detecting anomalies in data.
  • Validating data.
  • Benchmarking operations.
  • Enabling mobile reporting for real-time optimisation.
  • Improving demand forecasts.
  • Managing inventory.

Big data can become even more powerful when combined with other new technologies, such as artificial intelligence (AI) and the internet of things (IoT). AI can crunch data to manage operations or make predictions about how the supply chain will react to different scenarios, while IoT makes it possible to access high-quality, real-time data using sensors – for instance, to oversee inventory.
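The anomaly-detection idea mentioned above can be made concrete with a small sketch. The following Python snippet flags IoT sensor readings – here, stock levels – that deviate sharply from a trailing average; the window size, threshold, and data are illustrative assumptions, not drawn from any specific supply chain platform.

```python
# Illustrative sketch: flagging anomalous sensor readings with a simple
# rolling z-score. Window, threshold and sample data are assumptions.
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Return indices of readings that deviate strongly from the
    mean of the preceding window (a basic z-score test)."""
    anomalies = []
    for i in range(window, len(readings)):
        prior = readings[i - window:i]
        mu, sigma = mean(prior), stdev(prior)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Example: a stable stock level with one implausible spike.
stock_levels = [100, 102, 99, 101, 100, 103, 500, 101, 100]
print(flag_anomalies(stock_levels))  # → [6]
```

A production system would use more robust statistics and handle seasonality, but the principle – comparing live readings against recent history – is the same.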

A 2020 study from Oxford Economics found that 49% of supply chain leaders can capture real-time insights from data, while 51% use AI and predictive analytics to do so.

Combined, these innovations are sometimes known as “Supply Chain 4.0” – and they are already having a huge impact on many organisations.

Big data case studies

One Fortune 500 company partnered with software and analytics company N-iX to help manage data related to its supply chains, especially regarding inventory costs. The business wanted to extend its existing system to collect data from departments undergoing expansion, while migrating everything to the cloud for greater scalability. It also integrated more than 100 different sources of data into one platform, including historical records and forward-looking forecasts.

This kept all data in one place and allowed departments to draw on each other’s data for forecasting – for instance, the finance team could make predictions based on inventory data.

Meanwhile, UK supermarket chain Morrisons used software to boost stock-picking accuracy to 99% for grocery deliveries. The switch was needed when the retailer moved to a new site, where its existing systems wouldn’t function properly. The new software incorporated live key performance indicator (KPI) dashboards that enabled offices to track supply chains and performance in real time to serve customers, while ensuring everything remained on the shelves.

The potential of big data is clear – but to get the best results, the data involved needs to be accurate.

“Data quality takes on many forms, including accuracy, completeness, timeliness, precision, and granularity,” says Laney. He points out that most organisations don’t have n-tier visibility in their supply chain, which means they don’t understand what is happening beyond the first tier of suppliers in the chain. They may also have incomplete data on where items are in the supply chain or when disruptions will happen.

This is a serious hurdle. Prince says: “If a business’s data input is poor quality, you can hardly expect the supply chain decision-making that comes out to be high grade, whether using AI and predictive analytics or not.” 


For instance, KPMG’s supply chain predictor assesses the impact of various “what if” scenarios, such as changes in commodity prices or foreign exchange rates. Yet no matter how good the AI and predictive abilities might be, if the data input is all wrong, the predictions will be wrong by default and will fail to benefit supply chain management.

So where do most of the problems lie? Many companies run into trouble by omitting key fields, such as product dimensions, from their datasets, which can waste time and cause hold-ups further down the line.
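A basic completeness check catches this class of problem before it propagates. The sketch below is illustrative: the field names (“sku”, “dimensions”, “weight_kg”) are assumptions standing in for whatever a real product catalogue requires.

```python
# Illustrative sketch: finding records that lack required fields.
# Field names are assumptions, not from any named company's schema.
REQUIRED_FIELDS = ["sku", "dimensions", "weight_kg"]

def find_incomplete(records, required=REQUIRED_FIELDS):
    """Return (index, missing_fields) for each incomplete record."""
    problems = []
    for i, record in enumerate(records):
        missing = [f for f in required
                   if record.get(f) in (None, "", [])]
        if missing:
            problems.append((i, missing))
    return problems

catalogue = [
    {"sku": "A-100", "dimensions": "30x20x10", "weight_kg": 1.2},
    {"sku": "A-101", "weight_kg": 0.8},               # dimensions absent
    {"sku": "A-102", "dimensions": "", "weight_kg": None},
]
print(find_incomplete(catalogue))
# → [(1, ['dimensions']), (2, ['dimensions', 'weight_kg'])]
```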

“Another common and actually very commercial issue is lack of consistency in labelling inputs, which prevents collaboration within a large business,” says Prince. “Take naming conventions. If the same components are labelled by even slightly different names across multiple parts of a business, it risks a failure to recognise that the same part is being used, removing the opportunity to benefit from procurement economies of scale.”
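The labelling problem Prince describes can be illustrated with a small sketch that reduces each part name to a canonical key before comparing. Real master-data tools use far richer matching; the labels here are made up, and this shows only the underlying idea.

```python
# Illustrative sketch: grouping near-duplicate part labels by a
# canonical key (lowercased, punctuation and whitespace stripped).
import re
from collections import defaultdict

def canonical(label):
    """Reduce a part label to a comparison key."""
    return re.sub(r"[^a-z0-9]", "", label.lower())

def group_duplicates(labels):
    """Map each canonical key to the variant labels that share it."""
    groups = defaultdict(list)
    for label in labels:
        groups[canonical(label)].append(label)
    return {k: v for k, v in groups.items() if len(v) > 1}

labels = ["M8 Hex Bolt", "m8 hex-bolt", "M8-HEX BOLT", "M10 Hex Bolt"]
print(group_duplicates(labels))
# → {'m8hexbolt': ['M8 Hex Bolt', 'm8 hex-bolt', 'M8-HEX BOLT']}
```

Once variants are grouped, procurement can consolidate them under a single part number and capture the economies of scale Prince mentions.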

Caroline Carruthers, co-founder and CEO of data consultancy Carruthers and Jackson, says: “If incorrect data is used early on in the supply chain, then wrong goods can be ordered, incorrect quantities can be produced or logistics can become backed up.”

While the UK and its supply chain are unique in many ways, it largely shares the same challenges as other countries when it comes to collecting high-quality data. Carruthers says: “Data doesn’t have any borders, and in an increasingly globalised economy, neither do supply chains. Any errors that impact the UK will almost inevitably go on to impact the rest of the world, and vice versa.”

She highlights the need to switch between currencies and units of measure during trade as key examples.
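The unit-of-measure problem is one of the easiest to guard against in code: keep the unit attached to every quantity and convert explicitly, rejecting anything unrecognised. The sketch below uses standard kilogram conversion factors; the data and API shape are illustrative assumptions.

```python
# Illustrative sketch: explicit unit conversion so mixed kg/lb/tonne
# quantities can't be summed silently. Shipment data is made up.
CONVERSIONS_TO_KG = {"kg": 1.0, "lb": 0.45359237, "t": 1000.0}

def to_kg(quantity, unit):
    """Convert a (value, unit) pair to kilograms; reject unknown units."""
    if unit not in CONVERSIONS_TO_KG:
        raise ValueError(f"Unknown unit: {unit!r}")
    return quantity * CONVERSIONS_TO_KG[unit]

shipment = [(500, "kg"), (1200, "lb"), (2, "t")]
total_kg = sum(to_kg(q, u) for q, u in shipment)
print(round(total_kg, 1))  # → 3044.3
```

The same pattern applies to currencies, with the added wrinkle that exchange rates change over time and so must be dated as well as labelled.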

This can sometimes be solved by software and AI, which can notice inconsistencies early in the process. At the very least, it is something companies need to be conscious of.

How to use big data effectively

So, what exactly should a company consider before implementing systems that rely on big data?

“Ultimately, there is no point in investing in improving data quality if nobody in the organisation can actually use it,” says Carruthers. Indeed, a McKinsey study found that supply chain managers are often not familiar enough with data analysis to be able to understand and explore the possibilities offered by big data.

“Data literacy is critical to the proper use of data. Everybody in the supply chain needs to be able to read and understand the data they are being given – otherwise its quality and trustworthiness are of limited use,” adds Carruthers. This also ensures that companies can use the data they get to react and respond to external events quickly.

Another important factor is for everything to be more interconnected to facilitate more real-time data. Laney says: “Increasingly, the systems of those throughout an extended business ecosystem need to be connected at some level. More and more, this needs to be near-real time. Systems and applications that are insular will not position you well for the future.”

The use of IoT can play an important role in this – but again, managers and employees need to understand the data they are receiving.

Finally, Prince advises that it is integral to have a full picture of your supply chain, including the suppliers of your suppliers and the customers of your customers. “It’s also a good idea to have a handle on the risks associated with not having or using the correct data versus the benefits of doing so,” he says.

Now more than ever, UK companies need to make sure they understand what is happening in their supply chains and can react to the challenges that come their way. Big data is one of the best tools around right now to make that possible. But efforts shouldn’t end there – firms also need to ensure their teams can understand the data they receive, and that their inputs are as accurate as possible.
