
Can we manage our cyber risks?

Created:  17 Jul 2018
Updated:  17 Jul 2018
Author:  John Y
[Image: nuclear reactor schematic]

It can sometimes feel like cyber security is just too complex for us to understand and manage properly. With such a fast-moving and technically complicated topic, how can we accurately assess our cyber risks, and then manage them effectively?

The view that cyber risk is just too complex to manage is commonplace and understandable. But in this post, we'll explain why we are optimistic that cyber risks can be managed. A few months ago, Kate R and Geoff E introduced the concept of complexity, and discussed how it might apply to cyber security. In this blog, we'll build on that idea of complexity by looking at what we can learn from the safety engineering world - another highly complex domain.


Are cyber risks unpredictable?

Suppose we wanted to somehow calculate the 'riskiness' of our digital technology, and then use this to predict how a system could break or be attacked. The first thing to consider is the huge list of variables we'd have to deal with. There are the vulnerabilities in our technology to analyse and the intentions of threat actors to assess, along with a whole load of other variables, including how much money our organisation has to spend on cyber security. There are techniques for merging these assessments, which are fine in themselves, but they quickly become unworkable in the face of large interconnected systems.
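To see why, consider how quickly the numbers grow. The sketch below (in Python, and entirely illustrative rather than drawn from any real assessment) counts the pairwise interactions and joint states of a system as components are added:

    # Illustrative only: pairwise interactions between components grow
    # quadratically, and the number of possible joint component states
    # grows exponentially, which is why merging per-component risk
    # assessments stops scaling for large interconnected systems.
    from math import comb

    for n in (10, 100, 1000):
        pairwise = comb(n, 2)  # potential component-to-component interactions
        print(f"{n:>5} components: {pairwise:>7} pairwise links, 2^{n} joint up/down states")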

The problem is that we can't precisely calculate cyber risk because it is intrinsically unpredictable. The only way to build up a detailed understanding of how these kinds of systems will act is to observe their behaviour, or to model it, whilst being aware of the limitations of the models you use.
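What might modelling look like in practice? Here's a minimal sketch, with the components and every probability invented for illustration: rather than deriving a breach probability analytically, we simulate a toy system many times and observe how often it fails.

    # A toy Monte Carlo model: estimate breach frequency by observing
    # simulated behaviour rather than calculating it in closed form.
    # All probabilities below are invented for illustration.
    import random

    def breach_occurs() -> bool:
        web_compromised = random.random() < 0.05
        db_compromised = random.random() < 0.02
        # Interaction effect: a compromised web tier exposes the database.
        if web_compromised and random.random() < 0.30:
            db_compromised = True
        return db_compromised

    trials = 100_000
    breaches = sum(breach_occurs() for _ in range(trials))
    print(f"Estimated breach probability: {breaches / trials:.3%}")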


For cyber systems, is failure inevitable?

On 28 March 1979, the Unit 2 reactor at the Three Mile Island nuclear power station suffered a partial meltdown, releasing radioactive material into the surrounding air and water. In the investigations that followed, the sociologist Charles Perrow identified a number of properties of the reactor system which meant that failure was inevitable. Two of these properties are particularly relevant to cyber risk.

The first of these is interactive complexity. In a nutshell, this refers to systems with a large number of connections between individual components. The second property is tight coupling, which refers to the speed with which actions propagate through the overall system. If an event in one part of the system affects another part very quickly, the system can be called tightly coupled. Perrow argued that in systems with a high degree of interactive complexity and tight coupling, large-scale failure is unavoidable.
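The two properties are easy to see in a toy model. In the sketch below (the component graph and all probabilities are invented), 'links' stands in for interactive complexity and 'spread' for coupling - the chance that a failure crosses a link. Tightening the coupling turns local faults into system-wide failures:

    # Invented component graph: 'links' models interactive complexity,
    # 'spread' models tight coupling (chance a failure crosses a link).
    import random

    links = {
        "auth": ["api", "db"],
        "api": ["db", "queue", "auth"],
        "db": ["backup"],
        "queue": ["worker"],
        "worker": ["db"],
        "backup": [],
    }

    def cascade(start: str, spread: float) -> int:
        failed, frontier = {start}, [start]
        while frontier:
            node = frontier.pop()
            for neighbour in links[node]:
                if neighbour not in failed and random.random() < spread:
                    failed.add(neighbour)
                    frontier.append(neighbour)
        return len(failed)  # number of components that ended up failing

    for spread in (0.1, 0.5, 0.9):
        sizes = [cascade("auth", spread) for _ in range(10_000)]
        print(f"spread={spread}: mean failure size {sum(sizes) / len(sizes):.2f} of {len(links)}")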

Perrow identified these properties by looking at industrial systems and the organisations which manage them. As a result, the kinds of failure he was thinking about were accidents, or failures of safety. But let's now apply those two properties to cyber security. Digital systems are both interactively complex and tightly coupled, because that is precisely what we need them to be. They are the tools we use to make organisations responsive and efficient. As a result, applying Perrow's theory of accidents to cyber security tells us that cyber security breaches are basically inevitable, as well as being unpredictable.


Do we give up trying to manage cyber risk?

We've just argued that cyber risks are unpredictable and that breaches are inevitable. What options does this leave us? Do we follow the thrust of Douglas and Wildavsky's statement in their book Risk and Culture:

"Can we know the risks we face, now or in the future? No, we cannot; but yes, we must act as if we do."

Not quite. This perspective on risk can feel pretty disempowering, even if it is often accurate. But it also conceals a crucial part of the story: techniques have already been developed that can help you analyse, in the systems you manage, complexity of the sort Perrow identified. And there are tools that can help you model the causality of cyber security risks, making some sense of their apparent unpredictability.


We need more tools to manage cyber risk effectively

So, what are these tools and techniques that we can use to help us manage our cyber risks? In the first phase of the NCSC's guidance on Risk Management for cyber security, we introduced two distinct, but mutually supporting, types of risk management technique: component-driven and system-driven risk management. It's worth mentioning that some of the system-driven techniques we introduced in that guidance were originally developed in the safety engineering world, to deal with precisely the issues described above. But there is far more to risk management than these two types of technique, and for this reason we will be publishing the second phase of this guidance in the autumn of 2018.

First, we will introduce quantitative methods applied to cyber security. We will also present some techniques for analysing the causality of security breaches, such as attack trees and scenario planning techniques. Last, but by no means least, we'll present some practical suggestions around security governance, whilst recognising that there can be no one-size-fits-all approach to security governance. As with the first phase of our risk guidance, our aim is to broaden the range of techniques available to cyber risk managers.
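To give a flavour of the causal techniques, here is a minimal attack tree sketch. The tree, the leaf probabilities, and the independence assumption are all ours for illustration, not taken from the guidance:

    # Attack tree evaluation: OR nodes succeed if any child attack does,
    # AND nodes only if every child does. Leaves carry illustrative,
    # assumed-independent success probabilities.
    def or_node(*children: float) -> float:
        failure = 1.0
        for p in children:
            failure *= (1 - p)
        return 1 - failure

    def and_node(*children: float) -> float:
        success = 1.0
        for p in children:
            success *= p
        return success

    # Goal: attacker steals customer data (all figures invented).
    phishing, exploit_vpn, crack_admin = 0.30, 0.10, 0.05
    gain_access = or_node(phishing, exploit_vpn)
    steal_data = and_node(gain_access, crack_admin)
    print(f"P(data stolen) = {steal_data:.3f}")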

So, to the question 'Can we manage our cyber risks?' the answer is yes. But a qualified yes. If the cyber risk management profession uses only a small variety of risk management techniques, centred on component-driven approaches, then, as Kate and Geoff said, we will be gradually overtaken by the complexity of what we're seeking to manage. But if we adapt our approach to include a wider range of techniques, we stand a much better chance of keeping our systems, our organisations, and our country secure in cyberspace.


John Y

Risk Research Lead


11 comments

Anne Accreditor - 17 Jul 2018
Sorry but this is academic nonsense. Please can the NCSC write some actual guidance on risk and not more of this pontification on risk methodology. We get you are thinking deep and ponderous thoughts about risk. Just come back when you have something for us to do.
John Y - 20 Jul 2018
Thanks Anne. I am delighted that you're so keen for practical guidance on risk management. You'll be pleased to hear that we have already published some guidance on this topic (https://www.ncsc.gov.uk/guidance/risk-management-collection), which points to a range of practical risk tools that you can start using right now. We'll be adding further guidance in September this year, when we will introduce an even broader range of techniques.

After that, we will start presenting some worked examples of how others have practically used the variety of cyber risk techniques we introduce. But, we should be clear: what we can’t do is tell you which cyber risk management tool to use when – your choice of technique depends a lot on the particular risk problem you’re grappling with. There can be no single cyber risk management approach that will work effectively in every situation and every organisation.

It’d probably be more useful for us to talk about your concerns in person. If that would be useful for you, please drop a line to NCSC enquiries https://www.ncsc.gov.uk/contact, and mention that your enquiry is for me – I’ll make sure that I pick it up.
Gareth Croft - 19 Jul 2018
Encouraging to see this thinking and the use of safety engineering approaches. Disappointing that the narrative is still seemingly technology focussed. Humans are intrinsic to these systems too. Understanding and attempting to add quantified human risk to the analysis is also a rich source of added value, and essential to modelling a complex system. My experience of safety engineering suggests the operator is still often overlooked or underestimated. Bake people, process and technology into the approach and cyber security can be better managed.
John Y - 20 Jul 2018
Thanks for your comment Gareth. I completely agree with your point about the need to analyse the people and process as well as the technology, when looking at cyber security risk.
For sure, I could have made that clearer in this particular blog. If you take a look at the range of blog posts we’ve published under the topic of Sociotechnical Security https://www.ncsc.gov.uk/blog/sociotechnical-security, you’ll see a much broader discussion of the people and process elements of cyber security risk management.
Hope you find them interesting.
john miller - 19 Jul 2018
Yes, if we want, we can manage our cyber risks by introducing different methods to an unaware public. We can use the techniques available to cyber risk managers.
D Metcalf - 21 Jul 2018
Interesting as always. I do hope you guys are reading Nancy Leveson as well as doing your own thinking. For me, although risk management is part of the answer, it's not "the answer". If only there was only one!
Nick Tegg - 23 Aug 2018
I totally agree with D Metcalf, we are still in the business of providing assurance to system owners; risk assessment is not the only part of the assurance picture. We must also be able to link controls to risks and undertake an assessment of a range of security controls to ensure they are operating correctly, managed properly and configured appropriately.
Erica Basu - 02 Aug 2018
I really enjoyed the simple language used to explain the ecosystem via the safety systems analogy. So much good policy is not understood or complied with due to translation barriers like tech jargon. Immensely helpful for large digital economies like India, where I am from.
Haxxor - 21 Oct 2018
The aim you should have is to find a way to block DDoS attacks. There is a way to completely eradicate the problem; you need but to seek the answer.
Stuart - 26 Oct 2018
In contrast to some comments I think this is incredibly useful. It worries me that an 'accreditor' thinks this is nonsense. It may explain why so many of our government systems are constantly fighting to remain 'accredited'. If we continue to keep racing off down various security rabbit holes (encouraged by vendors from the sidelines) without stopping and taking stock we will never get on top of this. This article has created some really interesting internal discussions about how an organisation could/should change the way risks are managed 'practically'. Thank you for your efforts in making us look up from the immediate issues and consider how we can break out of the vicious circle so many risk managers are in.
Matt LD - 20 Nov 2018
Thanks for the thought-provoking and articulate article. With greater complexity comes greater uncertainty, and this is where safety and security diverge. Safety risk management is predominantly a question of quantifying aleatory uncertainty, whilst quantification of cyber (and other malicious) risks suffers from epistemic uncertainty. I recently led some work investigating how to quantify expert judgement using the theory of evidence. This approach now supports decision making during live SOC operations by combining individual pieces of evidence to highlight the most probable hypothesis for an event. But more research in this area would be invaluable. Thanks again.
