
Software development is not the only place where DevOps-style optimisation might be useful

Too much of IT focuses on the importance of what is measured, rather than measuring what's important

There’s a reason why it’s good to de-clutter. It’s not that the items being removed are completely, or even partially, useless. They may indeed come in useful sometime; that’s what makes it so hard to get rid of them. Surely, if there is enough storage space, you may as well keep them, just in case they come in handy one day?

While this may appear true, it is not a good enough reason. No, the main imperative for de-cluttering is that there are other, more important things that are not getting the focus and attention they require or deserve. Why should they be considered more important? Because they have a greater impact on desired outcomes.

Measuring progress based on true outcomes is hard. This is why so many processes (real and virtual) are assessed on more easily measurable numbers – volumes and iterations. These measure work done, but not outcomes achieved. This misses the most fundamental question: why does this matter? In other words, what is the end result for the business, customer or patient?

It might seem that this should not be an issue in IT. After all, storage capacities and densities are soaring, prices are falling and there is limitless capability in the cloud. Added to this, intelligent processes such as machine learning will need to be fed lots of data, so we might as well collect, save and process all the potentially useful stuff, right?

The wrong stuff

Even if potentially useful stuff is being stored, there are reasons not to try to handle everything. The reality is that most systems gather the data that is easiest to collect, not necessarily the data that is most important – or really useful. This can be seen in plenty of manual processes, where systems are skewed by data that is made important because it has been measured, not measured because it is important. In digital systems, this does not necessarily lead to better outcomes, just to processes that are automated and accelerated.

“Collecting and sharing ever more data does not necessarily increase its validity or veracity”
Rob Bamforth

Collecting and sharing ever more data does not necessarily increase its validity or veracity either. Errors can be propagated more rapidly and readily if they are not checked, ideally at every stage. This is an approach many in DevOps or DevSecOps teams would recognise as the way they have to deal with quality and security in rapidly evolving systems. Believable but not-quite-right information – fake news in the “real” world of social media, “fake data” in the internet of things (IoT) world – can waste time and effort. Even worse, it can lead to the fake becoming perceived as real.

A more scientific approach to data assessment is required, but data scientists are hard to train and perhaps even harder to find. This is an opportunity for some form of artificial intelligence (AI). It has long been hoped that increased intelligence would fix the “garbage in, garbage out” data challenge. Recent results, even from the massively powerful IBM Watson system, have shown this is not always the case. Some of its recommendations for cancer treatment were described as “unsafe and incorrect”. It may have been led astray by the use of hypothetical rather than real patient cases, but this is a critical process where desired outcomes are paramount, and the technology appears to fall short.

The right stuff

Pure data science alone is not sufficient. It needs to be applied with business imperatives guiding the focus towards the drivers that deliver the desired outcomes. This is something that, again, those involved in DevOps have already had to learn. Not all optimisation is equal. The benefits of automation and process digitisation are not evenly spread, and a holistic or systemic approach needs to be adopted. For example, the instant appeal of wiping 50% off a two-hour task has far less actual impact than shaving 10% from a five-day task. The former looks impressive, but is not.
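To make that arithmetic concrete, here is a minimal sketch, assuming a standard eight-hour day so that a five-day task represents 40 hours of work:

    # Hours actually saved by each optimisation (assumes an eight-hour working day)
    two_hour_task = 2            # hours
    five_day_task = 5 * 8        # 40 hours

    saved_small = two_hour_task * 0.5    # 50% off the two-hour task = 1 hour saved
    saved_large = five_day_task * 0.1    # 10% off the five-day task = 4 hours saved

    print(saved_small, saved_large)      # 1.0 vs 4.0: the "impressive" cut saves far less

The headline-grabbing 50% cut recovers one hour; the unglamorous 10% trim recovers four.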

Deciding where to start for maximum impact might appear to be a daunting, insurmountable task. However, process optimisation can follow the same path as any seemingly impossibly large and unknown challenge. Use a 10/80/10 split to assess what is there: 10% is clearly top priority, 10% is clearly not worth keeping or addressing, and for the remaining 80% it is too early to know. So address the 10% at either edge, then come back to the 80% and repeat the same process. Continual refinement.
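As an illustration only, the following minimal Python sketch shows what that 10/80/10 loop might look like, assuming a hypothetical score function that rates each item's impact on the desired business outcome:

    # Illustrative 10/80/10 triage: peel off the obvious 10% at each end,
    # defer the middle 80% and repeat. 'score' is a hypothetical rating of
    # each item's impact on the desired outcome.
    def triage(items, score, rounds=3):
        keep, drop = [], []
        for _ in range(rounds):
            if len(items) < 3:
                keep.extend(items)           # too few left to split meaningfully
                items = []
                break
            items = sorted(items, key=score, reverse=True)
            cut = max(1, len(items) // 10)   # roughly 10% at each edge
            keep.extend(items[:cut])         # clearly top priority
            drop.extend(items[-cut:])        # clearly not worth keeping or addressing
            items = items[cut:-cut]          # the remaining ~80%, revisited next round
        return keep, drop, items             # 'items' is whatever is still undecided

A call such as triage(backlog, score=lambda item: item["impact"]) would return the clear priorities, the clear discards and the still-undecided middle after three passes; the point is the repeated narrowing, not the specific code.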

Just because technology and apps make things easier to do does not mean they should all be done. Despite the power and capability of automation, de-cluttering, editing and prioritisation are becoming even more important.

Take a leaf out of the DevOps approach to problems, or even out of the manufacturing process optimisation book The Goal by Eliyahu M. Goldratt and Jeff Cox. Look for the real bottlenecks, apply continual optimisation, measure what’s really important to the business process, and de-clutter by dropping “we could” and replacing it with “we should”.

Read more about IT optimisation

  • Infrastructure optimisation for business application performance.
  • Essential Guide: Optimising hybrid IT infrastructure.
  • A four-phased approach to building an optimal data warehouse.
