Blog post

Developers need help too

Created:  06 Dec 2016
Updated:  06 Dec 2016
Author:  Helen L
Developer

For years, security research has been focused around technology. But now - finally - the humans in the system are getting the attention they deserve.

As a security community, we're beginning to recognise the impossible demands we've sometimes made of the end user - supposedly the weakest link - only to throw our arms up in despair when things don't work out in practice the way they looked on paper, blaming systemic failures on the supposed "weakness" of these users. But the end user is not the only human in the system. The competing perspectives, desires and pressures from all the people involved in getting something to market (CEO, security practitioner, accountant, legal expert, safety consultant) cause headaches for developers.


What's the problem?

In a world where connectivity is a prerequisite, it's easy to find examples where vulnerabilities in code have been exploited with some pretty devastating consequences. The first SQL injection attack was reported while I was (just about) still at school, yet it remains a significant and recurring problem. The tools to robustly protect against such vulnerabilities are widely known, so why don't developers use them? We don't believe the answer is that straightforward - perhaps as security professionals we should instead be asking "What stops these tools being used? And how do we make sure they are?"
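To make the "widely known tools" concrete: parameterized queries, available in virtually every database library, are the standard defence against SQL injection. A minimal sketch using Python's built-in sqlite3 module (the table and payload are illustrative, not taken from any real incident):

```python
import sqlite3

# A toy database with one user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable pattern: string concatenation lets the payload rewrite the query,
# so it would match (and leak) every row in the table:
#   "SELECT role FROM users WHERE name = '" + user_input + "'"

# Safe pattern: a parameterized query treats the input purely as data.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] - the payload matches no user, instead of dumping the table
```

The point of the blog's question stands: the `?` placeholder has been in library APIs for decades, yet the concatenation pattern keeps reappearing in production code.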


What do developers have to deal with?

Consider a developer - with no domain expertise in cryptography - using a cryptographic library API. These libraries are potentially powerful tools for protecting data, but used incorrectly they can create a false sense of security. Choosing the most appropriate algorithm and mode of operation, then selecting a sensible approach to key generation and secure key storage, all require fairly detailed crypto knowledge. Without it, these APIs are prone to misuse, which can lead to vulnerabilities such as failure to validate certificate chains correctly, insecure encryption modes and inadequate random number generation.

Navigating past the potential security pitfalls in APIs (and other tools) without this specialist knowledge is really hard, and isn’t helped by a prevailing expectation amongst library designers that developers 'are experts' and should therefore know better.
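One of the pitfalls listed above - inadequate random number generation - can be shown with Python's standard library alone. A general-purpose PRNG is deterministic, so anyone who knows or guesses its seed can reproduce every "key" it hands out; the secrets module instead draws from the operating system's CSPRNG. A small sketch (the key size is illustrative):

```python
import random
import secrets

# Pitfall: the random module is a deterministic PRNG (Mersenne Twister).
# Re-seeding reproduces the "key" exactly - an attacker who recovers the
# seed recovers the key.
random.seed(42)
weak_key = random.getrandbits(256).to_bytes(32, "big")

random.seed(42)
assert random.getrandbits(256).to_bytes(32, "big") == weak_key  # fully predictable

# Better: secrets draws from the OS CSPRNG and is intended for security use.
strong_key = secrets.token_bytes(32)
print(len(strong_key))  # 32
```

Nothing in the two APIs signals this difference to a developer without crypto knowledge - both calls return plausible-looking bytes - which is exactly the kind of silent misuse the library designer's "they're experts" assumption fails to catch.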

Usability. It is rarely seen as a fundamental requirement for design - let alone for security. We don't think it's reasonable to promote a 'secure' product while stating that its security depends on how it is used. How does a developer ensure that their product will be secure enough no matter how it is used, bearing in mind there's no such thing as perfect security?

Then there are the time pressures of getting code into production and, for many, embracing continuous delivery. In 2014, Amazon deployed 50 million changes - more than one change deployed every second of every day. Add to this a constantly shifting threat landscape, and we have a situation where conversations about security risk being left out because they are too hard, too slow and too expensive.


How can we make it better?

Whilst stamping our feet and cursing developers might be cathartic, it clearly isn’t having much effect. We need to invest time and effort into understanding developers and the development process, so that we can re-focus our efforts on creating developer-friendly approaches. We need to motivate and support these professionals to make better security decisions.

To explore this thorny problem in more depth, the NCSC and the Research Institute for the Science of Cyber Security (RISCS) have brought together a multidisciplinary community to start understanding the challenges. This community of leading academics, industry practitioners and government experts spans the social science disciplines through to more traditional technical backgrounds associated with cyber security. From these discussions, we’ll be issuing a call to this community for research proposals that the NCSC will sponsor under the RISCS umbrella over the next financial year.

In the meantime, expect to see some more blogs from us on various aspects of this topic in the coming months, as we explore the key drivers and blockers to secure software development.

6 comments

Pat Keane - 07 Dec 2016
Developers and large providers of business critical software to the public sector also need help. In my previous existence as head of IT in a local authority we regularly had patches and major software releases delivered with known vulnerabilities. We scanned all software before deploying. When brought to the attention of the providers we were almost invariably met with "Well, no one else has mentioned this". In the end we had to invest in specialist software at our cost to ensure we were protected and got through PSN!
Helen L - 11 Dec 2016
Pat, thank you for sharing this. Understanding these experiences reinforces the need for this work.

As I said above, there's no straightforward answer and it needs a holistic approach. Our research intends to help us better understand the causes of these vulnerabilities, how we can motivate and support developers more effectively, and how we can improve the cyber security elements of the tools and processes they use.

More about our commitment to reduce the load of cyber security on users in local government can be found in our Active Cyber Defence programme (https://www.ncsc.gov.uk/blog-post/active-cyber-defence-tackling-cyber-attacks-uk)
Alex Young - 13 Dec 2016
Developers really need help convincing management (usually senior rather than line). I'm a programmer, and I'm also interested in security. I've seen many security issues arise due to management pushback against things like test-driven development, and management buy-in of technologies we know to have weak security (Windows-based web hosting, for example). Also, management outsourcing development to third parties that aren't properly audited for best practices and code quality doesn't help.

Password security is hugely compromised by bad programming decisions, and this is the kind of thing that can easily be improved by raising awareness. But systems security as a whole is almost always weaker due to bad management. Would SQL injection be an issue if your code was tested by external intrusion detection auditors and a full test suite with 100% coverage?
Helen L - 16 Dec 2016
Thanks for your comment Alex, you've touched on a common observation: getting buy-in from seniors for cyber security is hard. This topic is not only on our radar, but is also being addressed by the NCSC's sociotechnical research as part of a wider cyber security issue. I think the solution could have many facets: being able to collect and analyse meaningful data to effectively model the cost benefits and build a more persuasive argument about the value of certain approaches; using everyday language to reduce the mystique around cyber security, making it accessible to everyone; as well as awareness and behaviour change aspects. Understanding and managing trust in the supply chain is also to be considered, as you point out. More on this in subsequent blogs.
Martyn Thomas - 16 Dec 2016
At present, companies make money out of bad software and it's third parties or the users of the software who suffer from the vulnerabilities. Somehow, the cost of defective products needs to fall on the manufacturer or importer of those products, so that they have the incentive to improve their products.

Maybe we need to introduce strict liability for the damage caused by defects - with a five-year period before full enforcement so that companies can make the necessary changes.
John Doe - 21 Dec 2016
This is all very critical. I am doing a PhD in the area and would really like to see how we could mutually benefit each other in the near future.
