Testing during software development is well recognised as good practice. It helps you gain confidence that the code you are developing is functioning as intended. You can gain confidence in the security of your products and services in the same way. Security testing can be manual, but it can also be automated.
Two particular approaches to development bring testing to the fore. The first, Test Driven Development, involves tests being drafted before any code is written; code is then written to pass those tests. The other, Secure by Construction, requires a specification to be written ahead of time, against which the code is checked using formal methods.
Although these test-friendly approaches are sometimes preferred, other methods such as ad-hoc or after-the-fact unit tests, system tests and end-to-end tests can be used to improve security.
Security testing comes in many forms but it has traditionally relied on point-in-time assessments, such as the 'penetration test' or 'software security assessment'. This kind of testing is often performed manually and is therefore relatively slow and resource intensive, but it can be thorough and allow security specialists to use their ingenuity in ways that aren't possible for other types of testing.
We call such tests 'point in time' because subsequent to the test, your code will continue to evolve and new attacks and vulnerabilities will be discovered. As a result, the confidence generated by such assessments decays.
These issues, combined with a growing trend towards automated build pipelines, may lead to the 'snapshot' form of security testing being undervalued. But you shouldn't cast aside all forms of security testing. That would lead to a very high risk scenario.
A wide range of automated security tools are available to help you quickly and repeatably check for security issues and compliance against organisational policies. For example, you could check for use of known-vulnerable third party software, insecure configuration of infrastructure and insecure handling of untrusted input.
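As a minimal sketch of the first kind of check, the snippet below fails if any pinned dependency appears in a list of known-vulnerable versions. The advisory entries and the `vulnerable_pins` helper are hypothetical; a real pipeline would query an advisory database with a dedicated tool rather than hard-code a list.

```python
# Sketch: flag pinned dependencies that appear in a (hypothetical) list of
# known-vulnerable versions. Real tooling would consult a live advisory feed.

KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"),   # hypothetical advisory entries
    ("otherlib", "0.9.1"),
}

def parse_requirements(text):
    """Parse 'name==version' pins from requirements-style text."""
    pins = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            pins.append((name.lower(), version))
    return pins

def vulnerable_pins(text):
    """Return every pin that matches a known-vulnerable entry."""
    return [p for p in parse_requirements(text) if p in KNOWN_VULNERABLE]

reqs = """
examplelib==1.2.0
safelib==2.0.0
"""
print(vulnerable_pins(reqs))  # [('examplelib', '1.2.0')]
```

A check like this is fast and repeatable, so it can run on every build rather than waiting for a periodic audit.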
The speed and repeatability of this kind of testing should improve confidence in your code and scale as you grow. Automated security testing can't replace specialist security testers. But, by reducing the burden of repetitive work, it frees them to concentrate on aspects specific to your system, for which automation is difficult. Google's Site Reliability Engineering team's definition of toil applies equally to security automation. Take advantage of what humans are good at, and what computers are good at.
Automated testing tools have a reputation for creating false positives. Where possible, iterate and tweak your testing to reduce them. If a tool generates too many false positives and adds little practical value, perhaps it isn't the right tool for the job. Assess findings before discounting them, so you don't accidentally dismiss a genuine security issue.
Regardless of how you combine automated and manual testing, security tests can only reveal the presence of security vulnerabilities; they cannot demonstrate their absence. You still need a plan for handling security flaws when they are found.
There are additional ways to gain confidence in your code, such as protective monitoring and formal verification. Formal verification can be used to specify up-front exactly how a system should function (for example, how memory will be used), and the behaviour of the code can then be checked against that specification. It is often used in safety-critical systems, but much less so for other kinds of development.
Strategies for automated testing
There are several different approaches to carrying out automated security testing. The tests themselves tend to fall into two groups:

- Analysis of your code to identify issues or misconfigurations. For example, searching through source code files to identify issues of potential security concern, such as input that is not validated, insufficient memory bounds checking, or incorrectly configured firewall rules.
- Testing that requires some of your infrastructure to run against. For example, scanning for unexpected ports listening on systems, or 'fuzzing' input parameters to see whether aberrant data triggers an unintended event.

There are also two broad ways of integrating these tests into a deployment pipeline:

- Security tests act as a 'gate' during continuous integration, blocking deployment until they have succeeded.
- Tests are run 'off to the side' of your deployment pipeline, so they don't block the deployment process if they identify issues.
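The 'fuzzing' approach mentioned above can be sketched very simply: throw randomised inputs at a function that handles untrusted data and check it never crashes. Here `parse_age` is a hypothetical stand-in for your own input handler; real fuzzers are far more sophisticated about generating interesting inputs.

```python
# Minimal fuzzing sketch: feed random printable strings to an input
# handler and fail if any of them raises an unexpected exception.
import random
import string

def parse_age(raw):
    """Example input handler: returns an int age, or None if invalid."""
    try:
        age = int(raw)
    except ValueError:
        return None
    return age if 0 <= age <= 150 else None

def fuzz(fn, runs=1000, seed=0):
    rng = random.Random(seed)  # fixed seed keeps the run repeatable
    for _ in range(runs):
        raw = "".join(rng.choice(string.printable)
                      for _ in range(rng.randrange(0, 20)))
        fn(raw)  # any uncaught exception here fails the fuzz run

fuzz(parse_age)
print("fuzzing completed without crashes")
```

Fixing the random seed makes the run repeatable, which matters when the fuzzing is part of an automated pipeline rather than a one-off exercise.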
Manual security audits
Before conducting a manual security audit, discuss and scope the coverage of your testing to address the risks you are most worried about. Issues that you already know to exist, or which you already have testing in place for, may be excluded. This helps ensure that 'expensive' manual testing effort is well spent, not wasted.
Not all security testers are the same. Skill and experience in the technologies being tested are important factors in an individual's ability to identify security issues. Two skilled security testers may identify different problems as a result of approaching the task differently. Keep these factors in mind when selecting people or companies to conduct your testing.
We have guidance specifically dedicated to penetration testing.
Align security testing with your software development lifecycle
Security testing is most effective when it works at the same speed as the rest of the development lifecycle. You can achieve this by adapting existing testing methods to work as part of your build pipeline.
Convert existing security tests to simple, automated tests
Many common security tests and compliance checks can be completed by automated tools and used to control the success of a build. This provides ongoing confidence in your service, reduces security regressions and contributes to faster delivery of production-ready code.
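As an illustration, common compliance checks can be written as ordinary unit tests so that a failure fails the build. The configuration values below are hypothetical; under a test runner such as pytest the `test_*` functions would be collected automatically, but they are called directly here to keep the sketch self-contained.

```python
# Sketch: compliance checks as unit tests that gate a build.
# DEPLOY_CONFIG is a hypothetical stand-in for your real deployment config.

DEPLOY_CONFIG = {
    "tls_min_version": "1.2",
    "debug_mode": False,
    "open_ports": [443],
}

def test_tls_version_is_modern():
    assert DEPLOY_CONFIG["tls_min_version"] in ("1.2", "1.3")

def test_debug_disabled_in_production():
    assert DEPLOY_CONFIG["debug_mode"] is False

def test_only_expected_ports_exposed():
    assert set(DEPLOY_CONFIG["open_ports"]) <= {443}

for check in (test_tls_version_is_modern,
              test_debug_disabled_in_production,
              test_only_expected_ports_exposed):
    check()  # any assertion failure here would fail the build
print("all security checks passed")
```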
Tailor testing to your application
Although useful in some cases, firing a generic scanner at a 'black box' is not usually that effective. Use your existing build, release and security specialists to add simple tests which are specific to your product. Improve on these over time as you gain experience - and remember, each test you automate frees up human testing resources. Unit tests scale better than humans do.
When a security issue is found, consider writing a new test for it
Automated tests help prevent an issue being re-introduced.
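For example, suppose a path-traversal bug was found and fixed; a regression test pins the fix in place so the issue cannot quietly return. The `safe_join` helper below is hypothetical, not a standard library function.

```python
# Sketch: a regression test for a (hypothetical) fixed path-traversal bug.
import os

def safe_join(base, user_path):
    """Join a user-supplied path under base, rejecting traversal attempts."""
    candidate = os.path.normpath(os.path.join(base, user_path))
    if not candidate.startswith(base + os.sep):
        raise ValueError("path escapes base directory")
    return candidate

def test_traversal_is_rejected():
    # This exact input triggered the original bug; the test keeps it fixed.
    try:
        safe_join("/srv/uploads", "../../etc/passwd")
    except ValueError:
        return
    raise AssertionError("traversal was not rejected")

test_traversal_is_rejected()
print("regression test passed")
```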
Expose results of testing to your developers
Being able to easily view, understand and discuss the outputs of security testing helps you respond to issues, spreads knowledge across the team, and reinforces the importance of the tests themselves. Examples include visual displays, or integrations with communication channels such as 'chat-ops'. When tests are not supporting the development process, iterate on them.
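One lightweight way to surface results is to summarise them as a short chat message. The sketch below only formats the message; the results structure is hypothetical, and actually posting to a chat webhook is left out so the example stays self-contained.

```python
# Sketch: turn security scan results into a short chat-ops message.
# The 'results' structure is a hypothetical example.

def format_chat_message(results):
    """Summarise scan results; list only the failures."""
    failed = [r for r in results if not r["passed"]]
    if not failed:
        return "Security checks: all passed"
    lines = [f"Security checks: {len(failed)} failure(s)"]
    lines += [f"- {r['name']}: {r['detail']}" for r in failed]
    return "\n".join(lines)

results = [
    {"name": "dependency scan", "passed": True, "detail": ""},
    {"name": "port scan", "passed": False, "detail": "unexpected port 8080 open"},
]
print(format_chat_message(results))
```

Keeping the summary short, with failures listed first, makes it more likely the team will read and act on it.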
Focus security specialists on testing that cannot be easily automated
Automated security testing reduces the burden of repetitive tasks but it doesn't replace manual security audits by skilled professionals.
Identify slow or manual tests and move them out of the build pipeline
A slow pipeline changes developer behaviour for the worse. Slow tests should be run in parallel to the build, so that they don't block or control its success. Failures should be communicated back to the team for resolution.
Correct or remove failing tests
False positives undermine security testing, because people get used to ignoring them. A few automated security tests that function correctly and highlight real issues are more valuable than many that produce a deluge of false positives. Tests should remain relevant and effective as a service evolves, so adapt or remove failing security tests - but first check that the failures are not real issues.
Test your tests
In a safe environment, try to make code changes that should be picked up by your security tests. Tests that generate false negatives give you false confidence, so make sure yours function correctly.
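One way to do this is to plant a known-bad sample and confirm the check actually flags it. The toy secret detector below is hypothetical; the point is the second half, where a deliberately insecure snippet proves the detector does not produce a false negative.

```python
# Sketch of 'testing your tests': plant a flaw and confirm detection.
import re

# Toy detector: flags hard-coded credentials assigned as string literals.
SECRET_PATTERN = re.compile(
    r"(password|api_key)\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE
)

def find_hardcoded_secrets(source):
    """Return every substring that looks like a hard-coded secret."""
    return [m.group(0) for m in SECRET_PATTERN.finditer(source)]

# Deliberately insecure sample: if the detector misses this, the check
# fails and reveals a false negative in our tooling.
planted_flaw = 'password = "hunter2"'
assert find_hardcoded_secrets(planted_flaw), "detector missed a planted secret"
print("detector caught the planted flaw")
```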
Build internal skills to support security testing
You don't need to be a security specialist to identify security issues and write tests. Security unit tests are just like any other, except the goal is slightly different.
Maintain awareness of the coverage of your testing
If you are unable to test all of your code, keep track of which parts you are testing and which you have missed. This will improve your overall testing.
These examples are intended to help you assess your own practices, and those of your suppliers. The list below is not exhaustive and should not be used as a box ticking exercise.
Examples of bad practice:

- Automated security testing is not part of the deployment pipeline.
- Traditional security policies and audits prevent the business from achieving continuous integration.
- Specialist security testing is not scoped to reflect the main risks and concerns of the system.
- Security tests are badly written: they don't always work, or they produce too many false positives, so developers ignore the test warnings and the wider security testing strategy is undermined.
- The results of security testing are hidden from developers, so mistakes cannot be learned from.

Examples of good practice:

- Static analysis tools are used to look for known issues in source code before deployment.
- Dynamic analysis security tools are run in parallel to the deployment pipeline, with automated results delivered to the team's communication channels.
- Tests are created to check for known security issues, which are then continuously run by the deployment pipeline.
- Specialist penetration and vulnerability testing has been scoped to cover key components of specific concern to the business.