
Building a cyber-physical immune system

Shantanu Rane, researcher in cyber-physical systems security at the Palo Alto Research Center, explains how our own immune systems can inspire the design of modern cyber-physical systems

Security is a relatively recent consideration in the design of cyber-physical systems (CPSs). Large CPSs, such as industrial assembly lines or the power grid, were not originally conceived with security in mind. Higher priority was given to the functionality provided by the system’s components, their interoperability and their ability to work reliably over long periods of time.

But with today’s increased connectivity, critical infrastructure is susceptible to a wide range of attacks designed to impede a system’s essential functions. An attack that causes a power blackout in a city block, for example, can have serious consequences for the safety and security of the people who rely on that system.

So it is not surprising to find that militant groups and nation-state actors seek to make critical infrastructure attacks a key component of their cyber warfare strategies. The variety and complexity of these attacks make defending CPSs a serious challenge.

There is no shortage of technology to secure data, both at rest and in transit, but these approaches do not work against attacks that originate in the physical world, including attacks involving sensors, actuators, transducers and controllers in a physical environment. Perimeter security measures, such as firewalls and access control systems, can deter or prevent cyber attacks originating from outside the system, but do not protect against insider attacks, which are typically initiated by agents familiar with the system.

Much work has been done in designing network intrusion detection systems, but these systems tend to learn slowly and are generally helpless against unknown and adaptive threats. Researchers and practitioners generally agree that various approaches will need to be combined to achieve some measure of CPS security, but it is not clear how exactly this will be accomplished. This is where the metaphor of the human immune system appears useful.

The human body is extraordinarily self-aware. When it senses an abnormal event, such as an injury, or an infection by a pathogen, it begins to engage in protective mechanisms, such as fighting infections by means of white blood cells, proceeding to repair wounds by clotting, and so on. You could think of this as the body performing anomaly detection.

Over evolutionary time, the human body has adapted to detect a bewildering range of anomalies – microscopic pathogens, different kinds of injuries, allergies and changes in the environment, for example. Of course, CPS researchers do not have the luxury of evolutionary time on their hands, but it is worth noting that the body under stress is responding to physical, chemical and biological stimuli.

A small subset of these responses is taught to the body. Vaccines, for example, are the body’s equivalent of trained intrusion detection systems, and clothing is an equivalent of perimeter security. However, the vast majority of responses are involuntary and automatic, as if the body maintains a model of itself and knows when that model has deviated from its nominal condition.

One of our research activities is to re-imagine CPSs from this immune system perspective. To create a cyber-physical immune system, the CPS must, like the human body, become self-aware. In other words, it must maintain a model of its own behaviour. But this is easier said than done.

To build a credible model of its own behaviour, the system must not just learn its digital behaviour, but also capture the behaviour of its physical subsystems. One way to achieve this is to represent the behaviour in terms of physical laws. For example, moving parts of a system will obey the laws of motion; parts of a heating subsystem will obey the laws of thermodynamics; and electrical installations will obey current and voltage laws.

In theory, it is possible to sense relevant physical quantities, apply the correct physical laws and then detect departures from expected behaviour. These deviations suggest that the system might be functioning abnormally, whether because of its own wear and tear, spontaneous failure or concerted malicious activity. Anomaly detection, in principle, operates in this manner, but it has been applied rather narrowly to specific subsystems.
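The idea can be illustrated with a minimal sketch. The function names, readings and tolerance below are hypothetical, and a real electrical subsystem would need a far richer model, but the principle is the one described above: a physical law (here, Ohm’s law) acts as the system’s self-model, and readings that deviate from the law beyond a tolerance are flagged as anomalous.

```python
# Hypothetical sketch of physical-law-based anomaly detection.
# Ohm's law, I = V / R, serves as the self-model for an electrical subsystem.

def residual(voltage, current, resistance):
    """Difference between the measured current and the current Ohm's law predicts."""
    return current - voltage / resistance

def is_anomalous(voltage, current, resistance, tolerance=0.05):
    """Flag a reading whose residual exceeds the tolerance (in amperes)."""
    return abs(residual(voltage, current, resistance)) > tolerance

# Nominal case: 12 V across a 4-ohm load should draw 3 A.
print(is_anomalous(12.0, 3.01, 4.0))  # small residual, consistent with the model
print(is_anomalous(12.0, 4.50, 4.0))  # large residual: wear, failure or attack
```

The same pattern generalises: swap Ohm’s law for the laws of motion or of thermodynamics, and the residual becomes a per-subsystem measure of how far the system has drifted from its own model.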

That work needs to be expanded to capture wide-ranging aspects of cyber-physical behaviour in the context of a self-aware system.

Take, for example, a drone, or unmanned aerial vehicle (UAV). If its altitude and GPS (global positioning system) sensors are hacked, it could be forced to stall, lose altitude and crash. But an internal immune system could detect inconsistencies, such as aerodynamic drag that is inconsistent with the expected air density, an unexpected cabin pressure difference or an accelerometer reading that disagrees with the reported climb. Action can then be taken to regain control, or at least to notify the control station.
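One of those inconsistency checks can be sketched in a few lines. The readings, function names and threshold below are invented for illustration: the climb rate implied by successive barometric altitude readings is compared against the vertical velocity reported by the inertial unit, and a large disagreement suggests that one of the sensor streams is faulty or spoofed.

```python
# Hypothetical sketch of a cross-sensor consistency check on a drone.

def baro_climb_rate(alt_prev, alt_now, dt):
    """Climb rate (m/s) implied by two barometric altitude readings dt seconds apart."""
    return (alt_now - alt_prev) / dt

def sensors_disagree(alt_prev, alt_now, dt, imu_vertical_velocity, threshold=2.0):
    """True when barometer and inertial unit disagree by more than the threshold (m/s)."""
    return abs(baro_climb_rate(alt_prev, alt_now, dt) - imu_vertical_velocity) > threshold

# The inertial unit reports a 5 m/s climb, but barometric altitude is falling:
# the two streams are inconsistent, so raise an alert to the control station.
print(sensors_disagree(100.0, 98.0, 1.0, 5.0))
```

In practice such checks would run continuously and feed a fusion filter rather than a simple threshold, but the immune-system analogy is the same: the vehicle notices that its readings no longer fit its model of itself.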

The approach described above must necessarily reach beyond the security community. Traditionally, cyber security has been the stronghold of security engineers, network engineers and cryptographers. But to build a cyber-physical immune system, it is necessary to engage with experts who work on its non-cyber aspects. These include applied physicists, chemical engineers, control theorists and other domain experts who are able to build and connect models of various subsystems, and reason about deviations from safe and secure behaviour.

To realise the dream of a cyber-physical immune system, we require nothing less than a truly interdisciplinary enterprise.

This was last published in June 2019

