Blog post

Of mice and cyber

Created:  24 May 2018
Updated:  24 May 2018
Author:  Geoff E
Cyber Mouse

What’s the difference between a combustion engine and a mouse? (No, this isn't the opening of a bad joke.)

Answer: a combustion engine is complicated, and a mouse is complex. (I told you there was no punchline.)

We use these words pretty much synonymously in everyday English, but there is a crucial difference. This blog expands on the NCSC's research into complex systems, and will be of particular interest to those of you who wanted more detail on the theory that underpinned our Risk Masterclasses and Risk Management Collection.


The combustion engine, the mouse, and cyber security

The combustion engine is complicated because if it is taken apart and put back together (by someone who knows what they're doing), then it will still work.

The same is not true of our complex furry friend*.

Complex systems appear all around us, and have been studied in great detail across a huge number of disciplines like physics, biology, and economics. Here at the NCSC, we're now spending time researching how complex systems theory can be applied to cyber security. This is because the systems we are charged with managing today are increasingly like the mouse and less like the combustion engine. And this means we have to rethink some of our approaches.

Whilst there is no universally accepted definition of a complex system**, the literature on complexity usually agrees on certain properties that these systems possess, including:

  1. Many intertwined and interdependent components that interact with each other, and their environment.

  2. The behaviour of these components will change depending on the feedback through these interactions (which in turn affects the feedback they give others).

  3. The presence of 'emergent behaviour', where the components become organised in some way (even when there is no overall planning or control).

  4. Change over time, in ways that cannot be:

    • predicted from the original state of the system (a characteristic we call non-determinism)

    • reset to an earlier state


In the case of our furry friend, you could consider its individual cells to be the components (apologies to any biologists reading this). These interact with each other through chemical signals, responding as appropriate, whilst also receiving information from outside, through the mouse's senses. This mouse system has many emergent properties (such as 'love of cheese' or 'fear of cats'), and it will learn and display new behaviours throughout its life which cannot be predicted from studying individual cells.

Of course, we can have a pretty good stab at predicting these emergent behaviours, but we can only do this because we have also considered the mouse as a whole system, and collectively we've had the opportunity to study numerous mice systems over the years. By contrast, we are only just starting to do this in our field.
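The first three properties above can be made concrete with a minimal sketch (a toy model, not anything the NCSC uses): each cell in a ring repeatedly adopts the majority value of its own small neighbourhood, and ordered domains emerge that no individual cell planned.

```python
import random

random.seed(1)

# Toy illustration of emergence: many simple components, each
# updating from purely local feedback, produce global organisation.
N = 60
cells = [random.choice([0, 1]) for _ in range(N)]  # random initial state

def step(state):
    """Each cell adopts the majority value of itself and its two ring neighbours."""
    nxt = []
    for i in range(len(state)):
        neighbourhood = state[i - 1] + state[i] + state[(i + 1) % len(state)]
        nxt.append(1 if neighbourhood >= 2 else 0)
    return nxt

for _ in range(30):
    cells = step(cells)

# After repeated local interactions, the line organises into ordered
# domains - an 'emergent' pattern no single cell controls.
print(''.join('#' if c else '.' for c in cells))
```

No cell has any notion of a 'domain', yet domains appear: that is the gap between component-level rules and system-level behaviour.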


So why are systems complex?

Well let's start at the level of a single computer (it is possible to go down to the bit level, but this gets a bit theoretical for day-to-day applications***). You might be old enough to remember Windows 3.1, which had approximately 2.5 million lines of code. Fast-forward 23 years and Windows 10 has approximately 50 million lines of code. The point is that technology (both hardware and software) is getting increasingly dense, meaning there are many more interactions between all the different processes that are necessary to make your computer function.

Of course, talking about a single computer is a bit silly in a cyber security context; each device is just one component in a much larger connected system. We are in the era of hyperconnectivity, with more and more technology having wireless interfaces. And now that we are no longer limited by the addressing constraints of IPv4, more and more things can communicate with each other over the Internet. So at both a micro and a macro level, there are ever increasing technical interactions, making non-deterministic emergent behaviour ever more likely.
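The scale of that shift in addressing constraints is easy to check (a back-of-the-envelope calculation, not a claim about how many addresses are actually in use):

```python
# IPv4 addresses are 32-bit, IPv6 addresses are 128-bit.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,} addresses")      # 4,294,967,296
print(f"IPv6: {ipv6_addresses:.3e} addresses")    # roughly 3.4e38
print(f"IPv6 space is {ipv6_addresses // ipv4_addresses:.3e} times larger")
```

Roughly four billion addresses versus a number with 39 digits: effectively every device, sensor and appliance can have its own address, and a potential interaction with everything else.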

And this is before we've even considered people...


More than just the techy bits

When we're thinking about our complex systems, we need to consider far more than just the hardware and software. People are an essential part of any system, acting as individual components with extremely adaptable behaviours. Moving out to a larger scale, organisations can also be components of our systems. This not only includes your organisation, but any other organisation that influences the behaviour of your own. This could be through business transactions, laws and regulations, through your people (via the interactions in their personal lives), or through your technology (by vendors updating software, for example).

We cannot ignore any of these components when we consider a system, since they all interact, and change their behaviour accordingly. A computer behaves according to the way it is used (and programmed) by people. People behave in the way they need to get their job done, which is affected by the technology they use and the policies and culture of their organisation. Of course, organisations will also try to change the behaviour of people and technology by updating policies and hardware/software, as well as by training their staff. These changes and adaptations mean that the system can look very different from what you might expect.


The mother of all complex systems

Some systems are already well studied and undeniably complex. Many academics have turned their attention to the mother of all complex systems: the Internet. How do the properties of a complex system (described above) apply in this case?

  1. It obviously possesses interacting components (of all kinds).

  2. It has adaptive qualities based on history/feedback. ARPANET (the precursor of the Internet) was conceived from the need to 'adapt' to communications disruption. Today the Internet can adapt to service interruption or degradation through its dynamic routing and Quality of Service capabilities. Plus, of course, there's feedback from all the people living their lives online, interacting with each other and responding to what they see.

  3. It possesses emergent behaviour. The underlying network topology of the Internet is an emergent property, specifically at the Autonomous System (AS) level****. We can also observe emergent Internet behaviour at the social level, such as viral trends which see people pouring buckets of iced water over their heads or eating Tide Pods!

  4. It changes over time. The Internet today is vastly different from even a decade ago. And while we can predict how the technology might change, it is almost impossible to predict which startup will be the next one to change the world, and how the social changes enabled by the Internet will play out across the globe.

There are also interesting aspects being studied in other fields that we should pay attention to. For example, in finance we can see emergent behaviour from the high-frequency trading algorithms used on the stock markets. The use of adaptable algorithms is on the rise in our field too, and we are starting to see their potential effect in other areas, such as influencing the spread of information through our social networks.

The complexity of the Internet and stock markets is self-evident. What we have to understand - and why this matters to all of us who work in cyber security - is that even a small organisation's system can be complex. This is because of its sociotechnical nature, its connectivity to the wider world, and the inherent complexity of its devices.


What does all this mean for cyber security?

We've established that systems are complex - but what does this mean for security? Well, it tells us that our systems are inherently unpredictable, and you can probably figure out why that's bad for security. It also teaches us to recognise that there may be larger scale emergent security risks - the ones we really want to know about - that we cannot detect by observing our systems at the component level. But things are not as bleak as they sound...


Towards a risk management toolbox

Unfortunately, today's commonly used security risk management methods and frameworks assume determinism and only consider the component 'view'. This is limiting when, as discussed, we also need to consider the macro-level emergent and non-deterministic behaviour of our systems, as well as their components. As John Y observes in his blog, we need a toolbox of different techniques because "...there is no single method for doing risk management for cyber security which can be applied universally, to good effect". And this is very much reflected in our risk management guidance collection, where we discuss the need for a range of different, complementary techniques.

By working with different risk management techniques, practitioners can better reduce uncertainty to make more informed decisions. Such a toolbox needs to accommodate the properties of complex systems, alongside component-driven techniques, in order to provide a more complete understanding of risk. Returning to the analogy of our furry friend, we need techniques that let us observe the entire mouse, as well as its individual cells, and the know-how to determine when to use these different approaches.


Taming complexity

Understanding how we can learn from complexity science to improve the security of our systems is still in its infancy. But we can start to consider how design principles such as constraint, analysability, robustness and resilience can help us 'tame' and manage our systems, and even benefit from their inherent complexity.

Can we begin by controlling complexity? If the level of complexity is far more than we need to achieve our goal, then can we adopt approaches which constrain it, so that our systems become more predictable and thus more analysable? By restricting the dynamics of our systems - so that there are fewer states - we are better able to understand and analyse them, which in turn helps direct assurance activities and provides more comprehensive confidence in their security.
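To sketch what 'fewer states' buys us (the lock states and events below are invented purely for illustration), a system constrained to an explicit finite state machine can be exhaustively explored, so claims like "this bad state is unreachable" become checkable rather than hoped for:

```python
from collections import deque

# Hypothetical door-lock protocol: a small, explicit set of states and
# transitions. Because the state space is constrained, we can enumerate
# every state the system can ever reach.
TRANSITIONS = {
    ('locked', 'enter_pin'): 'checking',
    ('checking', 'pin_ok'):  'unlocked',
    ('checking', 'pin_bad'): 'locked',
    ('unlocked', 'timeout'): 'locked',
    ('unlocked', 'lock'):    'locked',
}

def reachable_states(start):
    """Breadth-first search over every state reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for (src, _event), dst in TRANSITIONS.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen

states = reachable_states('locked')
print(sorted(states))  # the complete set of reachable states
assert 'open_with_no_pin' not in states  # a 'bad' state is provably unreachable
```

An unconstrained system offers no equivalent guarantee: if the set of states is unbounded or unknown, exhaustive analysis of this kind is simply unavailable.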

Can we design systems to be inherently robust? Do we know how complexity influences whether our systems can recover from errors, or whether these errors will grow chaotically through the system? A robust design allows a system to endure the changes that will undoubtedly arise throughout its operational lifetime.

Can we even get the complexity to work in our favour? By understanding how different types of systems work at the 'mouse' level, we can start to predict how they might behave and react to interventions, just as we can predict how our furry friend might behave. We can use this to direct our systems into a more secure state, either through the design of the technology or the way we work with our users. Better still would be to design systems to be resilient, using their inherent adaptability to respond to changes (including attacks), while continuing to function.

But let's begin by accepting that we are not entirely the masters of the systems we are creating; currently there are limits to our abilities to predict and control them. Once we have accepted this, we are then able to think more critically, and ultimately, make more informed security decisions.


Geoff E -  Head of the Sociotechnical Security Group 

Kate R - Sociotechnical Security Researcher

*No mice were harmed in the making of this blog.

**There are also other properties that are commonly attributed to complex systems, but we don't have room to discuss all of these here so we recommend "Simply Complexity" by Neil Johnson and "Complexity: A Guided Tour" by Melanie Mitchell as good sources of further reading.

*** We commonly assume that computers are deterministic: if you put the same thing in, you get the same thing out. But it has to be the exact same thing, and even a 1-bit variation in a program, or in the input to a program, can produce extremely different results, which is why all digital systems can be unpredictable. How is this possible, I hear you ask? If computers were nondeterministic, they wouldn't be much use! The point is that digital systems possess a fundamental paradox: they can be deterministic and unpredictable at the same time. To substantiate this we need to delve into some theoretical computer science for a moment, specifically Rice's theorem, which states that all non-trivial, semantic properties of programs are undecidable. In short, it is not possible to construct a single algorithm that always gives a correct yes-no answer about such a property of an arbitrary program. Sandia National Laboratories has done some excellent work in this regard, showing that relatively simple digital systems can be nondeterministic and display emergent behaviour.
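A quick illustration of this 1-bit sensitivity (using a hash function for convenience; this is not the Sandia work itself): flipping a single input bit to SHA-256 changes roughly half of the 256 output bits.

```python
import hashlib

# Flip one bit of the input and compare the two (fully deterministic) outputs.
data = bytearray(b"risk management guidance")
digest_a = hashlib.sha256(bytes(data)).digest()

data[0] ^= 0b00000001  # a single-bit change to the input
digest_b = hashlib.sha256(bytes(data)).digest()

# Count how many of the 256 output bits differ between the two digests.
differing = sum(bin(a ^ b).count('1') for a, b in zip(digest_a, digest_b))
print(f"{differing} of 256 output bits changed")  # typically around 128
```

Both runs are perfectly deterministic, yet the relationship between a small input change and the resulting output change is, for practical purposes, unpredictable.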

**** Despite not being planned or engineered, it has been observed that these AS nodes follow a power law (another common property of complex systems), where most are small (in terms of connectivity density) but a few are very large. The reason these findings were surprising (Park, 2005) is that they did not fit existing models, or perhaps even existing conceptions, of how networks are connected.
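A minimal preferential-attachment sketch (in the spirit of the Barabási-Albert model, not a model of the real AS graph) reproduces this 'most small, a few very large' pattern from a simple local rule:

```python
import random

random.seed(0)

# Each new node links to an existing node with probability proportional
# to that node's current degree ('the rich get richer'). No global plan.
degrees = {0: 1, 1: 1}   # start with two linked nodes
endpoints = [0, 1]       # each edge contributes both of its endpoints

for new_node in range(2, 2000):
    target = random.choice(endpoints)  # degree-proportional choice
    degrees[new_node] = 1
    degrees[target] += 1
    endpoints += [new_node, target]

median = sorted(degrees.values())[len(degrees) // 2]
print(f"median degree: {median}, maximum degree: {max(degrees.values())}")
# Most nodes stay small while a handful grow into very large hubs.
```

The heavy-tailed degree distribution is emergent: nothing in the rule mentions hubs, yet hubs reliably appear.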


Gobind - 31 May 2018
Great thought on table for cyber security assessors in applying risk management
Tim Schofield - 01 Jun 2018
This dovetails neatly with a talk given by @halvarflake at CyCon
Nick Elwell - 26 Aug 2018
I have so many clients and know so many people who should read this - but almost certainly won’t! I put it down to that well known scientific phenomenon - The Ostrich Effect.
Nat Gudgeon - 21 Oct 2018
A fascinating read over Sunday Brunch, thank you. It would be interesting to know how an in-depth knowledge of complex systems theory, and its application, could better inform cyber security risk mitigation in a proactive and predictive, rather than reactive way. Also, the TV show “ Halt & Catchfire,” has provided me with an interesting perspective on the early evolution of the internet, within a gaming-culture context, that I had lived through in blissful ignorance...yes, I am really of that generation.
Tom Quaile - 22 Oct 2018
I've seen this first hand in multi-processor/multi-tasking computing system I worked on. As the load on the system changed over years, several protection mechanisms around message passing that worked well became the very same systems that crashed the system. These were when the system was pushed further than anticipated, but not beyond any spec. This system was all across the UK in telephone exchanges - so not a good emergent behaviour mechanism.

I like the way this applies even outside the systems 7 layer model and into the social levels. Good to see it formalised into the thinking of risk and the connections between 'things' and what they mean.

I think the problem here is that as the new behaviour itself emerges, it becomes a new thing in itself that makes the system more complex, more prone to another level of evolution and a significant leap in complexity.
