Blog post

The Trouble with Phishing

Created:  26 Feb 2018
Updated:  26 Feb 2018
Author:  Kate R

Phishing has become one of the most talked-about threats in cyber security, and so, quite rightly, organisations want to protect themselves against it.

Products marketed to stop phishing typically involve training users, and more often than not this is based on the idea of 'phishing your own users'. These are usually sold as a single package that sends fake phishing emails to your users; if a user clicks, they're shown training material, and you get to know who clicked. Over time, the number of people who click falls, apparently showing an increased ability to spot phishing. It seems really elegant. But if you scratch the surface, you find that it hides a number of issues.

An impossible task

First things first. No training package (of any type) can teach users to spot every phish. Spotting phishing emails is hard. Spotting spear phishing emails is even harder. Even our experts struggle. The advice given in many training packages is based on spotting standard signs, such as poor spelling and grammar, and while these can be a good place to start, they can't be used to spot all phishing emails. Bad guys can spell (and some nice genuine people can't).

More importantly, we can't expect users to remain vigilant all the time, even if there were concrete signs to look out for. Staying alert to the threat from phishing whilst at your desk (where users are probably most aware of the risk) is hard enough. But phishing can happen anywhere and at any time, and people respond to emails on their phones and tablets, and outside core hours. Clicks happen.

Responding to emails and clicking on links is an integral part of work. Attempting to stop the habit of clicking is not only extremely difficult; it may not even be what you want. Asking users to stop and consider every email in depth won't leave enough hours in the day to do work. This is why the NCSC have taken a much broader approach in our new phishing guidance, where we emphasise the importance of developing multi-layered defences, so you have multiple opportunities to stop a phishing attack causing serious damage.

If a phishing simulation package claims to have achieved a tiny click rate then, quite frankly, your fake emails were unrealistically easy to spot. Which brings us on to...

The lure of metrics

Let's be honest with each other. Phishing simulations aren't just about training. They are also popular because they produce a metric (e.g. 'Last week 60% of people fell for our phish; this week only 35% fell for it'). This seems really positive and encouraging, since it appears to show that something is being achieved, but unless you're careful you might just end up wasting time and effort.

Metrics are extremely difficult to come by in the security space, and having a clear, quantitative metric that can show progress in an area you care about can be really seductive. But you need to look beyond the headline figures and find out what the number is really telling you. Is it really giving you an idea of your company's defences against the real threats? The risk of living or dying by this single metric is this: what happens when you make the test emails more sophisticated, for example to test spear phishing? This will do terrible things to your click rate. You can get any result you want by adjusting the emails you send out, which is hardly an objective measure of your defences. And if you are on the receiving end of a metric that shows a vast improvement, you should be asking some very probing questions about how the simulation was designed, because it is likely that the emails were just too obvious.

There may still be a role for this metric, but you have to know what question you want to answer, and design your simulation systematically to answer it. For example, you might want to know which departments are more susceptible to phishing, or whether a new approach to training has had an effect. This could help you plan your defences and offer support where it is most needed. The important thing is to approach it methodically and design it carefully, to avoid unintended consequences, unnecessary disruption, or a meaningless result.
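
To make this concrete, here is a minimal sketch (in Python, using entirely hypothetical departments and results) of the kind of analysis that answers a specific question: breaking results down by department and tracking report rates alongside click rates, rather than producing a single headline number:

    # A sketch of per-department analysis of phishing simulation results.
    # The departments and outcomes below are hypothetical, for illustration only.
    from collections import defaultdict

    # Each record is one simulated email: (department, clicked, reported).
    results = [
        ("finance", True, False),
        ("finance", False, True),
        ("hr", True, False),
        ("hr", True, True),
        ("it", False, True),
        ("it", False, False),
    ]

    stats = defaultdict(lambda: {"sent": 0, "clicked": 0, "reported": 0})
    for dept, clicked, reported in results:
        stats[dept]["sent"] += 1
        stats[dept]["clicked"] += int(clicked)
        stats[dept]["reported"] += int(reported)

    # Show both rates per department; a falling click rate on its own
    # says little if the test emails also got easier.
    for dept, s in sorted(stats.items()):
        print(f"{dept}: click rate {s['clicked'] / s['sent']:.0%}, "
              f"report rate {s['reported'] / s['sent']:.0%}")

The exact code matters far less than the design: decide the question first, then collect and segment the data needed to answer it.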

The consequences of blaming users

While developing the guidance, we discovered a widespread blame culture relating to phishing. Many organisations believed that if users were blamed or punished for clicking phishing emails, they would somehow be able to spot them next time around (and if they clicked again, the answer was more punishment). Quite simply, this does not work, and it can also cause a great deal of distress and breed distrust between users and security. There is absolutely no reason that blame and punishment need to be part of training, and phishing simulations should never be used as a tool to catch people out.

It is not OK to blame the user for the following reasons:

  • It doesn't help. Research shows that people click for lots of reasons. These could be personality traits (such as being helpful and efficient) or situational (such as being busy and stressed). Or the phisher simply did a really good job of writing the email. Threatening someone with punishment doesn't change any of these factors.
  • It undermines the relationship between employees and security. You want employees to trust you and to come to you with their concerns when something doesn't feel right. They can be a valuable early warning system and a vital part of your monitoring. Someone who is afraid for their job will not report mistakes.
  • You risk sticky legal issues. With more evidence showing that no one can be expected to spot all phishing emails, punishing people for clicking on emails you've sent starts to resemble entrapment. Always check with your HR department before undertaking any phishing simulations.
  • Training should be about building confidence and empowering users to make informed decisions. Destroying their confidence by asking them to perform impossible tasks - and then calling them 'the weakest link' - is counterproductive.

Even if a user clicks repeatedly, disciplinary action is not warranted. A user clicking three times in a row may be a sign of a root cause (such as the requirements of their role, a lack of necessary workplace adjustments, a high-stress situation, or confusion about the current training material), or just bad luck. If a particular user appears to be struggling, engage with them to find out whether there is another way to help. While there won't be an easy fix in every case, you may find some cases where a different business process, or a different approach to training, can make a real difference.

So where does this leave training?

There is clearly a need to provide information to your users; they are a vital layer in your defences. If just one user reports a phish, you get a head start on defending your company against that phishing campaign, and every spotted email is one less opportunity for attackers... but phishing your own users isn't your only option.

Try being more creative. Some companies have had a lot of success with training that gets participants to craft their own phishing email, giving them a much richer view of the influence techniques used. Others are experimenting with gamification, creating a friendly competition between peers rather than an 'us vs them' situation with security. CPNI have an ever-expanding set of free resources to help you with training.

And think about when you actually want to do training. Stopping a user just after a click to explain how they should have spotted your fake phish intuitively seems like a good idea, but if you lock someone's IT until they complete a lengthy course, you are causing a massive disruption to their working day. Besides, if the training is really tedious, it might feel like punishment, even if that wasn't your intention.

Whatever you do, remember that no training technique will get your users to recognise every phish. It's also essential that you don't spend your entire budget on training when you need to invest in multiple layers of defence to build solid protection against phishing.

Kate R
Sociotechnical Security Researcher


John - 26 Feb 2018
Brilliant, thoughtful article Kate, in the best tradition of NCSC
Ross - 27 Feb 2018
Good article with good points that I have observed myself. However, I see a critical omission: I don't track click rates to show improvement, I track report rates. Higher (and prompt) report rates allow you to respond effectively to attacks by tracing IOCs, identifying who else clicked, and quickly isolating affected endpoints.
Kate R - 28 Feb 2018
You’re right, clicks are not the only thing that can be measured, and each thing gives a snapshot of your defences (although never the complete picture). However, any test needs to be well designed, and analysed carefully, to ensure it tells you something useful for your organisation.

Measuring reporting rates is a good way to have a positive attitude towards testing, because it is a behaviour you can incentivise people to do (through useful feedback for example), rather than defaulting to blaming someone for making an error.
James Linton - 28 Feb 2018
Excellent article, and practically addresses the flaws in the current system. Awareness is great but technology should do as much of the heavy lifting as possible.
David M - 05 Mar 2018
Excellent article and this is a human and machine problem that requires a human and machine solution.

Ironscales use training and awareness to strengthen the user, but we are also the first to use AI to pick up on anomalies at the user inbox level and mitigate them in real time. The human and the machine working together. The notion of continuously training staff to successfully prevent the multiple attacks that come in is one we struggle to agree with. The case of the White House a few months back is probably the best example: a dozen or so White House staff (possibly the most well-trained in cyber in the world) fell foul of a very simple spear-phishing attack from a Gmail account. No amount of training stopped it.
Thomas Smith - 01 Mar 2018
Very surprised to see the NCSC rubbishing the efforts of UK phishing providers in this way - all of whom are working hard to protect their customers in the best way possible. Did you even engage with any of us in this sector to talk about what we do, or ways we could improve? No one I've spoken to in peer companies has had any engagement with you. Very disappointed.
Kate R - 08 Mar 2018
Dear Thomas,

Unfortunately, the customer perception of phishing simulations has become very distorted and poor application of these products and damaging practices have become widespread. This blog was aimed at the organisations who use these products, to help improve their understanding and to encourage best practice.

Security practitioners are often under pressure to undertake phishing simulations, without really knowing whether it is the right approach for them, just that everyone else is doing it. This is not a good approach to cyber security. We hope that blogs like this will enable practitioners to push back on 'everyone else is doing it', so they can do the things that are best for the security of their organisation, and get value out of the products they use. We’d also like to see the challenges we’ve identified act as a catalyst for innovation in this field, prompting vendors to use their existing deep knowledge to develop and market new ways of serving their customers’ training needs. If you have any ideas how we can work with your sector to do this, then please get in touch.
Hans de Jong - 03 Mar 2018
Thanks very much, Kate. We are just working on improving our security awareness programme and we are thinking about implementing the whole phishing package, including carrots and sticks. We were looking to 'punish' repeat offenders by making them do more online training or attend obligatory classroom training, and taking away their internet browsing access until successful completion. Now you mention that disciplinary action is not warranted. Is this claim also supported by scientific data? Is there any hard evidence that I could use to convince my management that it does not work?
Kate R - 12 Mar 2018
Hi Hans,

There is good evidence to show that a punishment approach does not work with phishing:

The idea behind reward/punishment is to change people's motivations so they choose to do the right thing. But choosing to click only good links isn't a simple behaviour where people immediately know the right answer; it requires knowledge, attention, and time to decide the correct approach, if it is even possible. Even highly motivated people have a limited capacity to apply all of these and still get their job done (see research on the compliance budget for more information).

There is research (cited in the blog) that shows what does affect whether someone clicks or not. These are things that are innate to an individual (such as being trusting) or outside of their ability to change (environmental factors). Punishing (or rewarding) the end user will not change these. And if people are clicking links because they are helpful, trusting people who are efficient at doing their job and answering emails, would you even want to change them?

Therefore, we can see there is little to gain from punishment, but there is potentially a lot to lose in the proposed approach. In your case you could lose a substantial amount of productivity if people are unable to continue with their work while they undertake the training and wait for their internet access to be restored. What would the impact on your business be if this happened just prior to a crucial deadline?
Ben (Yellow Room Learning) - 03 Mar 2018
Hi, great article. I agree with a lot of it, especially around the issues of blame culture. It just doesn't work as an effective way of encouraging your staff to do the right thing. As a provider of simulated phishing I will defend the use of it, though. I truly believe that it is a great way of exposing people to phishing emails in a safe environment. I will say, however, that both the level of phishing email and the type of follow-up education are critical in making it work effectively. For example, we never highlight poor spelling and grammar as a way of spotting a phish; this is an old-school view of phishing that is no longer relevant. Our education focuses on URL inspection - the only real way of knowing what might happen when you click a link. I also agree that using it as a metric is not that useful. There are so many factors that skew results that it cannot truly be used to measure whether people are more vigilant after a simulated phish, e.g. different staff on annual leave, different email content, etc. The last point I will make is that teaching people to phish is one of the best methods of getting people to understand and recognise the constructs of a phishing email, the psychology involved, who to target, etc; it's something that works for us. Again, great article.
Kate R - 12 Mar 2018
Hi Ben,

I'm glad you liked the article. I agree that teaching people to phish is a great way to get people to appreciate what is going on, in particular the influence techniques that can be used. It can also be a lot more engaging and interesting for the participants.
A note of caution about URLs, though. Even with training, most people cannot tell a good URL from a bad one, and things like shortened links (which we try to avoid these days, as discussed here: ), redirects via marketing sites, and legitimate sites with less-than-legitimate-looking URLs really muddy the waters. URLs are also difficult to preview on a mobile device, and it is time-consuming to inspect every link. As with all signs of phishing, it is something that might be detectable in some situations, but it cannot be used for every phish, and once a dodgy URL has been identified, it is best to let the computers do the heavy lifting by blocking access to that website.
