
Security and usability: you CAN have it all!

Created:  24 Aug 2018
Updated:  24 Aug 2018
Author:  Emma W

An old security joke goes like this:

User: “How can I secure my computer against all cyber attacks?”

Security person: “Easy. Disconnect it from the internet, turn it off and put it back in the box. And who said you could unpack it?”

Like many jokes, this one contains a grain of truth. Nothing can be made totally secure without also being rendered unusable. So we could turn security up to 11 everywhere, and insist all the computers go back in their boxes. But if we did that, our organisations wouldn’t be able to function, and any security wins would be pointless. The role of security should be to support and enable the business, and it should do this by managing risks without blocking essential activities, slowing things down, or driving up the cost of doing business. But how can we do all that? Can we really have it all?

Yes, we can. In fact, we must. This blog post explains how making security more usable can help to make an organisation more secure, and includes some practical steps that sysadmins, risk managers and security decision-makers may wish to implement.

What is usable security?

Usability is sometimes seen as purely an end user issue, or something that only relates to user interface design and user experience, but there is far more to it than that. In organisations, usable security also covers the design, development, configuration and maintenance of the tools and systems the business runs on. It covers less tangible things such as how the organisation develops and uses policies and processes, as well as the cultural factors that influence how people approach their work, and approach security at work. 

People often talk about finding the right trade-off between security and usability, but I find this framing unhelpful in understanding what's really going on. Imagining a sliding scale with SECURITY at one end and USABILITY at the other encourages us to prioritise features that provide a lot of security in theory, but which actually undermine security in practice. A good example is insisting that users create long, complex, hard-to-guess passwords, and forbidding them from writing them down. Here, the theoretical security benefits are eradicated because, in practice, users end up writing their passwords on post-it notes.

Poor usability often = poor security

It's worth bearing in mind that usability doesn’t depend on security (you can easily make a product very simple to use, and also very insecure), but security often does depend on usability. If a product has to be used in a particular way in order to be secure - but people cannot easily use it that way - the product is not secure in any meaningful sense. The seminal paper in this field, Why Johnny Can’t Encrypt, found in 1999 that:

“when our test participants were given 90 minutes in which to sign and encrypt a message using PGP 5.0, the majority of them were unable to do so successfully.”

In 2016 a follow-up paper, Why Johnny Still, Still Can’t Encrypt, found that:

“more than a decade and a half after Why Johnny Can’t Encrypt, modern PGP tools are still unusable for the masses.”

PGP should have made it easier for anybody and everybody to send and receive encrypted emails. But because it was not usable enough, it didn’t achieve that goal.

The unsung heroes, not the enemy

Just as there are different ways to use products, there are different ways to complete work tasks, sometimes using entirely different products and procedures. Some will be more secure than others. UCL's prescient paper 'Users Are Not The Enemy' (1999) tells us that people will happily choose the more secure way if it’s quick and straightforward, and allows them to accomplish their task. However, if the more secure way is not usable (maybe it's too difficult or time-consuming, or it stops them achieving their goal), then users will likely find their own solution. Their own solution will get the job done - and let the user get on with their day - but the security team may disapprove. From their perspective, it may look like the user has acted recklessly in putting their day job ahead of security.

Scenarios like this were used to illustrate why 'people are the weakest link in security'. However, this point of view is now outdated. The NCSC increasingly recognise that people are the unsung heroes of cyber security. They need support - not vilification - for doing the normal human things that keep our businesses running, and make them successful. If the security team's approved procedures for accomplishing tasks mean that productivity grinds to a halt, are users wrong to seek alternative routes?

So how can I make security usable?

As Johnny's experience with PGP demonstrates, it isn't always easy to make your security usable. You and your team may have to do some work to make things simple for everyone else, but here are some examples you may want to consider.

Aim for secure by default

When designing or buying technology at work, aim for products that are secure by default: products where the normal, obvious way to use them is also the most secure way. This means you don't have to rely on people to 'do things right' all the time. For instance, many smartphones now come encrypted out of the box, or let you turn on encryption fairly easily - this requires little or no effort, and makes the phones much more secure.
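The same principle applies to software you build in-house: design interfaces so the secure path is the default, and insecurity requires a deliberate, visible opt-out. The sketch below is purely illustrative (the `fetch` function and its `verify_tls` parameter are hypothetical examples, not from NCSC guidance):

```python
import ssl
import urllib.request


def fetch(url: str, verify_tls: bool = True) -> bytes:
    """Fetch a URL over HTTPS.

    TLS certificate verification is ON by default, so the normal,
    obvious call is also the secure one. Callers who genuinely need
    to skip verification must opt out explicitly in their code,
    which makes the insecure choice visible in code review.
    """
    if verify_tls:
        # Secure default: verifies the server certificate and hostname.
        context = ssl.create_default_context()
    else:
        # Deliberate, auditable opt-out (e.g. for a local test server).
        context = ssl._create_unverified_context()
    with urllib.request.urlopen(url, context=context) as response:
        return response.read()


# Typical callers get security without having to think about it:
#     data = fetch("https://example.com")
```

The design choice here is the one the post describes: nobody has to remember to 'do things right', because doing nothing special already is the right thing.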

Take the strain

Look for ways to shift the effort of security, and of making security decisions, away from employees and on to the back end, maybe by layering defences (as we recommend in the NCSC's Phishing Guidance and Password Guidance). Read our Secure by Default case studies to see how organisations have grappled with these sorts of issues in practice.

Make it practical

Get better value from your security measures by making them more usable. Your security policy manual may cover every conceivable scenario, in painstaking detail, but if this makes it so long that people can't find the right advice when they need it (or understand the language) then your policies are only adding a fraction of the value that they could.

Look at workflows

Look again at the workflows behind everyday tasks. Does security often trip people up, slow their productivity and damage their morale? For instance, how often are people required to authenticate themselves in the course of a working day? Are all those occasions really needed? Try to improve usability and security by better integrating administrative systems or processes. For example, regular password expiry can mitigate the risk of leavers' accounts remaining active when HR and IT systems are poorly linked, but expiry policies damage security in their own right; fixing the underlying process is the better investment.

Support and empower your staff

Trust that people will generally try their best to do things sensibly - and let them tell you when things aren't working for them. To work more securely, your staff need the right information (in terms that make sense to them) at the right times, supported by the right skills, tools, habits and motivation. You can encourage this by:

  1. Getting good feedback. Encourage people to talk to you about security at work and listen attentively to them, as they will be sharing perspectives, issues and even solutions that you might not otherwise see.
  2. Being positive and welcoming. By doing this, people will be more likely to come back again in future. And don't worry - 'being positive' doesn't mean 'saying yes all the time'.
  3. Acting on the feedback in your long-term planning. For instance, if you hear that many people are worried about the security of their mobile devices (but cannot do their jobs without working out and about), that should probably affect your priorities.
  4. Learning from your people: strive for workable security, not perfect security. Solutions already being used unofficially in your organisation can sometimes be applied more broadly, but because usability is contextual, security that's usable for one part of your business may not work well for all other parts.

Usability is so central to building security that works in the real world - not just on paper - that no savvy organisation can afford to ignore it. The only good security is usable security.

Emma W
Commissioning Editor for Advice and Guidance

4 comments - 28 Aug 2018
It is really a good article on this website on cyber security. Thanks
Oludare Olorunfemi - 16 Sep 2018
This is a brilliant article that captures the essence of combining security with usability. The user's perspective and ease of use need to be considered in proffering security solutions. Otherwise, it just becomes what is stored on the shelf!
Pete L. - 03 Oct 2018
so, if a feature is disabled because of security concerns and the user makes a workaround to subvert the control one assumes that security failed by this logic, true or false?
Emma W - 09 Oct 2018
Hi Pete, thanks for your comment. You'd first want to fully understand what the person did and why, so you've got the full picture. Perhaps it was a random mistake. But in general, it's very common for people to find workarounds for tools, policies or processes that don't work for them at work. Mostly, they mean no harm - they're just trying to do their jobs.

In this case we could say security has 'failed' in a narrow sense, by failing to block the person from using the feature (presumably, there was a good reason for blocking it!). However, we tend to take the broader view: that there is a far more serious, general failure if security routinely blocks people from completing normal work tasks. That doesn't mean it's always easy to get it right (often it costs money, and needs co-operation from other bits of the business, e.g. IT and HR), but we think security can only really succeed when it treats business needs as seriously as security needs.

