Responding to the Growing Complexity in IT and Security

Last night’s BCS Security Forum Strategy Panel meeting included an interesting round table discussion on complexity. It’s a subject that’s been occupying my mind for three decades ever since I was first introduced to the fascinating world of cybernetics and control theory at Cass Business School in the late 70s. It’s also a current hot topic for many IT and Security professionals who are encountering major challenges getting to grips with the increased complexity of modern infrastructures and systems. Why is this happening? And what can we do to improve the situation?

The starting point is to accept that it’s not just a case of “keep it simple, stupid”. In fact, that might help or hinder, depending on how you go about it. There are several dimensions to the problem. Some of these are related. But all of them are the result of the structural changes to the way we build systems, introduced by the very nature of computers and networks themselves.

Firstly, there is the problem of scale, caused by the power of networks. The NHS programme is the classic example of that. It’s a fact of life that we will keep attempting to build bigger systems simply because it can be done.

Secondly, there is the problem of increased variety, i.e. the number of states a system can be in. This is caused by the variety-amplification effect of computers: they enable more states to be achieved. (Read the late, great Stafford Beer’s books for more thoughts on that.)
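
A back-of-the-envelope sketch in Python illustrates the point about variety: the state space of a system multiplies, rather than adds, as configurable components are combined. The component names and counts below are invented purely for illustration.

```python
# Rough illustration of 'variety': the number of states a system can be in.
# Component names and counts are invented purely for illustration.
from math import prod

# Independent configurable components and the number of states each can take.
components = {
    "feature_flags": 2 ** 8,    # 8 independent on/off switches
    "user_roles": 5,
    "workflow_stages": 12,
    "config_options": 10 ** 3,  # three independent ten-value settings
}

total_variety = prod(components.values())
print(f"Combined state space: {total_variety:,} states")

# Adding one more modest 20-state module multiplies, rather than adds to, the variety.
print(f"After adding a 20-state module: {total_variety * 20:,} states")
```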

Thirdly, there is the slow but sure change from a deterministic to a probabilistic model for computer systems, brought about by the introduction of networks. No longer can we expect a pre-determined system output for any particular system input. Repeat a transaction and it may give you a different result, depending on the state of the network. As David Tennenhouse, Intel’s Research Director, has pointed out, this is similar to the paradigm shift that occurred in Physics when Quantum Mechanics appeared on the scene: it necessitated a change in skills.
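
A toy simulation makes the shift concrete: the same transaction, repeated with identical inputs, can yield different outcomes depending on the state of the network. The failure rates below are invented, not measured.

```python
# Toy model of the deterministic-to-probabilistic shift: the same transaction,
# repeated with identical inputs, produces different outcomes depending on the
# (randomly varying) state of the network. Failure rates are invented.
import random

def submit_transaction(loss_rate=0.05, retry_rate=0.10):
    """Return the outcome of one transaction sent over an unreliable network."""
    roll = random.random()
    if roll < loss_rate:
        return "timeout"
    if roll < loss_rate + retry_rate:
        return "retried"  # succeeded, but only after retransmission
    return "ok"

random.seed(1)
print([submit_transaction() for _ in range(10)])
# Identical inputs, yet a mix of 'ok', 'retried' and 'timeout' results.
```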

Fourthly, networks are becoming more complex as we move from organic, point-to-point networking towards more efficient but more complex hub-and-spoke networks. Such scale-free networks exhibit very different topological (and other) characteristics, with implications for quality, risk, reliability and vulnerability management.
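
For readers who want to see the difference, here is a short sketch (using the networkx library, with arbitrary sizes and parameters) contrasting an evenly wired random network with a scale-free one built by preferential attachment: the averages look similar, but only the scale-free network contains hubs.

```python
# Contrast an evenly wired random network with a scale-free (hub-dominated) one.
# Requires the networkx library; sizes and parameters are arbitrary choices.
import networkx as nx

n = 1000
random_net = nx.gnm_random_graph(n, 3 * n, seed=42)    # edges spread roughly evenly
scale_free = nx.barabasi_albert_graph(n, 3, seed=42)   # preferential attachment builds hubs

for name, g in [("random", random_net), ("scale-free", scale_free)]:
    degrees = [d for _, d in g.degree()]
    print(f"{name:10s} mean degree {sum(degrees) / n:.1f}, max degree {max(degrees)}")

# Both networks have a mean degree of about six, but the scale-free one typically
# contains hubs of far higher degree - the feature that matters for vulnerability.
```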

Fifthly, on top of all this I sense a trend towards less prescriptive, fuzzier methods of decision making caused by shorter business cycles and higher degrees of freedom and personalisation. Tomorrow’s systems could be relatively process-free, with more freedom to improvise. We might also be inclined to employ a more analogue approach to measurement and direction.
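
As a loose illustration of what a more analogue measure might look like, the sketch below contrasts a binary pass/fail rule with a graded score. The metric and thresholds are entirely invented.

```python
# A binary pass/fail rule versus a graded, more 'analogue' indicator.
# The metric (days a system has gone unpatched) and thresholds are invented.

def binary_compliance(days_unpatched: int) -> str:
    return "PASS" if days_unpatched <= 30 else "FAIL"

def graded_exposure(days_unpatched: int) -> float:
    """Return a score from 0.0 (freshly patched) to 1.0 (90 days or more)."""
    return min(days_unpatched, 90) / 90

for days in (5, 29, 31, 75):
    print(f"{days:3d} days: {binary_compliance(days)}  exposure={graded_exposure(days):.2f}")
# The binary rule treats 29 and 31 days as opposites; the graded score does not.
```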

In summary, our infrastructures and systems are getting bigger, more varied, more adaptive, less predictable and much harder to measure and control. And these trends are unstoppable, unless we elect to become a nation of Luddites.

What can be done about all this? Well, the best course of action for any permanent change is to embrace it rather than fight it. There are, however, solutions for both options. Many professionals instinctively react by attempting to simplify the situation. There are techniques for achieving this. You can introduce limits, filters, standards, classifications and rules to reduce the number of states a system can be in.
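
A minimal sketch of that idea: replacing a free-text field with a fixed classification collapses an unbounded set of possible inputs into a handful of permitted states. The category names below are invented.

```python
# Variety reduction by classification: an unbounded free-text field is replaced
# by a small, fixed set of permitted states. The category names are invented.
ALLOWED_SEVERITIES = {"low", "medium", "high", "critical"}

def classify_severity(raw: str) -> str:
    """Map arbitrary input onto one of four permitted states, or reject it."""
    value = raw.strip().lower()
    if value not in ALLOWED_SEVERITIES:
        raise ValueError(f"severity must be one of {sorted(ALLOWED_SEVERITIES)}")
    return value

print(classify_severity(" High "))   # 'high'
# classify_severity("quite bad")     # rejected: outside the permitted states
```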

It’s also possible to break down complicated problems and solutions into smaller pieces to make some aspects of them more manageable. This helps, though it might not necessarily reduce the overall complexity of the system. You can break software into modules to facilitate testing; this does not reduce the size of the input and output space, but it does enable re-use of modules.
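
A small sketch of that trade-off, with invented function names: the validation module can be tested in isolation and re-used by several callers, yet the input space of the overall system is no smaller.

```python
# Modular decomposition: one small rule, testable in isolation and re-used by
# several callers. Function names are invented. Note that the overall system
# still accepts the same input space; the complexity is organised, not removed.

def validate_account_id(account_id: str) -> bool:
    """Module boundary: a single, independently testable rule."""
    return account_id.isdigit() and len(account_id) == 8

def open_account(account_id: str) -> str:
    return "opened" if validate_account_id(account_id) else "rejected"

def close_account(account_id: str) -> str:   # re-uses the same module
    return "closed" if validate_account_id(account_id) else "rejected"

# The module can be unit-tested without exercising the rest of the system.
assert validate_account_id("12345678")
assert not validate_account_id("1234-678")
```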

A good architecture also helps to manage and maintain a system. At the very least you can avoid having to scrap a system when only one part needs to be changed. But there is also a danger that an over-engineered architecture might itself add to the complexity of the system. Take the seven-layer OSI communication model for example. Do we really need so many layers? We can and should apply Occam’s razor to cut out unnecessary complexity.

But there is an unavoidable principle that most designers miss: you cannot employ a simple control system to control a complex system. The controlling system must have at least as many states as the system you are trying to control. It’s a fundamental rule of control theory: Ashby’s Law of Requisite Variety.
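
The law can be illustrated with a few lines of arithmetic: a regulator with fewer distinct responses than the disturbances it faces cannot hold the outcome to a single desired state. The figures below are arbitrary.

```python
# Ashby's Law in counting form: with D types of disturbance and only R distinct
# responses, a regulator cannot map the system into fewer than ceil(D / R)
# distinct outcomes. The figures below are arbitrary.
import math

def best_achievable_outcomes(disturbances: int, responses: int) -> int:
    return math.ceil(disturbances / responses)

for responses in (1, 4, 16, 64):
    outcomes = best_achievable_outcomes(disturbances=64, responses=responses)
    print(f"{responses:3d} responses -> at best {outcomes} distinct outcomes")

# Only when the controller's variety (64 responses) matches the variety of the
# disturbances (64) can the outcome be held to a single desired state.
```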

So if we can’t reduce the variety within our systems to a controllable level, what else can be done? Well two further things can be considered. The first is to increase the number of states in the controlling mechanism. This is relatively easy to achieve by the application of technology. Computers themselves enable control mechanisms to be extended to whatever scale is required. The classic attempt at this was Stafford Beer’s heroic efforts in the early 70s to build a cybernetic control system for the Chilean Government. It was never completed because his client died following a military coup.

The second approach is to study the characteristics of complex adaptive systems in order to develop better, more appropriate governance techniques. For example, use simulation to assess risk, quality and vulnerability in hub-and-spoke networks, rather than relying on crude and misleading calculations based on measures averaged across nodes and branches. But this is a vastly under-researched area. We need much more development because the problems are huge and we’ve only just begun to scratch the surface of what can be achieved.
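
A minimal sketch of that kind of simulation (again using networkx, with an artificially small hub-and-spoke network) shows how an averaged measure can mislead while a simulated failure exposes the single point of failure.

```python
# Simulating failure impact in a hub-and-spoke network instead of relying on
# an averaged measure. Requires networkx; the network is artificially small.
import networkx as nx

g = nx.star_graph(50)   # one hub (node 0) connected to 50 spokes

mean_degree = sum(d for _, d in g.degree()) / g.number_of_nodes()
print(f"Mean degree: {mean_degree:.2f}")   # about 2 - looks sparse and benign

def largest_component_after_losing(graph, node):
    h = graph.copy()
    h.remove_node(node)
    return max((len(c) for c in nx.connected_components(h)), default=0)

print("Lose one spoke:", largest_component_after_losing(g, 1), "nodes still connected")
print("Lose the hub  :", largest_component_after_losing(g, 0), "nodes still connected")
# The average says little; simulation shows the hub is a single point of failure.
```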

1 comment

David, couldn't agree more. The issues of complexity are never as simple as they seem. People play a part - although we should design for simplicity wherever possible (life is complex enough without encouraging it!), simplicity is not always the primary aim of either the designer or the client.

A complex system can be small, and a simple system very large, but often 'large' and 'complex' go hand in hand. Size/scale can add complexity in any case, but also tends to flush out 'exceptions' and 'variations' to even the simplest process. These will often be built into the development of the new system 'because the technology can handle it', and because it is superficially easier to let the IT cater for variations than to tell people they need to standardise their process.

However, finding the right balance between unlimited functionality and variety on the one hand, and a finite availability of development resources and expertise on the other, is an interesting challenge which is not just technical. I don't see any evidence we're getting better at it, either. I think we ought to encourage that debate.

Les