1. Build your service using a segmented approach.
A system that requires an attacker to exploit multiple vulnerabilities in independent components before unfettered access can be gained makes their job very difficult.
No segment or tier of a service should implicitly trust requests from other segments. Each segment should assume that the previous segment failed to validate its input correctly, and should perform its own validation. Depending on your design, an input validation failure in a component which only receives input created by another component (as opposed to a user) can be a strong indicator of compromise.
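As a minimal sketch of this defence-in-depth validation, consider a hypothetical internal component that receives record IDs from an upstream tier. The ID format, function names, and log messages are illustrative assumptions, not part of the original guidance.

```python
"""Sketch: an internal tier re-validates input it receives from another
segment, treating validation failure as a possible compromise indicator."""
import logging
import re

logger = logging.getLogger("internal-tier")

# Assumed record-ID format for illustration, e.g. "AB123456".
RECORD_ID = re.compile(r"^[A-Z]{2}\d{6}$")


def validate_record_id(record_id: str) -> bool:
    """Re-validate even though the upstream tier should already have
    checked this value; never implicitly trust another segment."""
    return bool(RECORD_ID.fullmatch(record_id))


def handle_request(record_id: str) -> dict:
    if not validate_record_id(record_id):
        # Upstream components only ever send well-formed IDs, so a
        # malformed ID arriving here is a strong indicator of compromise.
        logger.critical("possible compromise: malformed record id %r", record_id)
        raise ValueError("rejected malformed record id")
    # ... process the record ...
    return {"id": record_id}
```

The point is not the regex itself, but that the check is repeated in every segment and that its failure is alerted on, not silently discarded.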
2. Anonymise data when it’s exported to reporting tools.
Any performance or reporting tool should operate over a desensitised data set.
We recommend implementing controls to redact/anonymise data within the core service. In this way you maintain control of the process rather than relying on reporting tools to anonymise data for you.
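One way this in-service redaction could look is sketched below. The field names, the choice of which fields to redact versus pseudonymise, and the keyed-hash pseudonym scheme are all illustrative assumptions.

```python
"""Sketch: redact/anonymise records inside the core service before any
data is handed to a reporting tool."""
import hashlib

SENSITIVE_FIELDS = {"name", "address"}   # redact entirely (assumed fields)
PSEUDONYMISE_FIELDS = {"customer_id"}    # replace with a stable pseudonym


def anonymise(record: dict, pepper: bytes) -> dict:
    out = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            out[key] = "[REDACTED]"
        elif key in PSEUDONYMISE_FIELDS:
            # A keyed hash gives a stable pseudonym, so reporting tools
            # can still count and group records without seeing real IDs.
            out[key] = hashlib.sha256(pepper + str(value).encode()).hexdigest()[:16]
        else:
            out[key] = value
    return out
```

Because the redaction runs in the core service, the reporting tools only ever receive the desensitised data set.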
3. Don’t deploy applications or design functionality which enable the running of arbitrary queries against your data set.
These applications undermine segmented system designs by providing an easier path to compromise the system.
4. Do not implement functionality that would be damaging if used by unauthorised individuals.
For example, supporting a bulk data export function makes it easier to export large numbers of records without being detected. Instead, consider implementing single record access via API. This should allow you to design a monitoring service with a greater chance of detecting suspicious activity.
5. Avoid creating caches or temporary stores of data within the service.
These stores are likely to be less well protected than the main data store, but can potentially yield high-value information to an attacker. If a cache or temporary data store is required, then it should operate a data-fading policy that purges records as soon as possible after access has finished, ensuring that minimal data is stored in the cache.
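The data-fading idea can be sketched as a cache whose entries are purged on first use, with a short TTL as a backstop; the TTL value and class name are illustrative assumptions.

```python
"""Sketch of a data-fading cache: entries are purged as soon as the
access that needed them finishes, and swept on a short TTL as a backstop."""
import time


class FadingCache:
    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry)

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def take(self, key):
        """Return the value and purge it immediately (single use)."""
        value, expiry = self._store.pop(key)
        if time.monotonic() > expiry:
            raise KeyError(key)
        return value

    def sweep(self):
        """Backstop purge of anything that outlived its TTL."""
        now = time.monotonic()
        for k in [k for k, (_, exp) in self._store.items() if now > exp]:
            del self._store[k]
```

At any point in time the cache holds only records that are actively in use, so compromising it yields far less than compromising the main store.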
6. Encrypt partially completed forms under a key controlled by the user.
A common requirement is to support form completion over several user sessions, allowing the user to complete the form at their convenience.
To reduce the risks associated with compromise of this partial data store, it is desirable to encrypt the data in such a way that it is only retrievable by the user, and therefore not available in bulk to an attacker who has gained access to the system.
This could be achieved by encrypting the data you store for the user under a key which they hold in a session cookie. Alternatively, you could use a key derived from a user-provided password combined with a salt for that record.
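The password-derived-key option could be sketched as follows. The iteration count and salt length are assumed example values, and a real service would pair this derivation with an authenticated cipher (AEAD) for the encryption itself; only the key derivation is shown here.

```python
"""Sketch: derive a per-record encryption key from a user-supplied
password plus a per-record salt, so the service never stores the key."""
import hashlib
import os


def derive_record_key(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256; the iteration count is an assumed example value.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)


def new_record(password: str):
    salt = os.urandom(16)                    # stored alongside the record
    key = derive_record_key(password, salt)  # used transiently, never stored
    return salt, key
```

Because only the salt is stored server-side, an attacker with full access to the partial-form store still cannot decrypt records in bulk without each user's password.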
7. Regularly rebuild components that have considerable access to data over a long period of time.
You can reduce the amount of data that malware on a given component can access by destroying and rebuilding it automatically on a regular basis.
Techniques used by continuous integration tools to automatically create and destroy environments can be used to help satisfy this principle.
8. Only handle data which is essential to your service.
Understand the difference between processing and storing data. Do not store all the data you capture out of habit; store it only when absolutely necessary.
9. Retain data for the minimum time necessary.
Challenge the need to retain complete datasets for long periods of time. Not holding personal data is the simplest way to avoid its loss.
10. Avoid displaying unnecessary or bulk data to users.
Data views that contain many different elements or display sensitive fields are high-value targets. Screens in services should show sufficient information for the user to perform the function required, and no more.
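One way to enforce this is to project each screen's data down to only the fields that view needs; the view names and field sets below are illustrative assumptions.

```python
"""Sketch: per-view field projection, so each screen receives only the
fields it needs to perform its function."""
VIEW_FIELDS = {
    "case_summary": {"case_id", "status"},     # assumed example views
    "contact_details": {"case_id", "phone"},
}


def project_for_view(record: dict, view: str) -> dict:
    """Drop every field the named view has no need to display."""
    allowed = VIEW_FIELDS[view]
    return {k: v for k, v in record.items() if k in allowed}
```

Applying the projection server-side, rather than hiding fields in the user interface, ensures the unnecessary data never leaves the service at all.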
11. Data model design should allow for tokenisation.
Using this approach, sensitive elements are replaced with surrogate values or tokens which can then be passed around a system without exposing high-value data.
Careful design of the data model and tokenisation mechanism can ensure that multiple compromises would be needed for an attacker to recover sensitive information.
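Tokenisation can be sketched as a token vault: sensitive values are swapped for random surrogate tokens, and only the vault, which would be kept in a separately protected store, can map tokens back. Names are illustrative.

```python
"""Sketch of a token vault for replacing sensitive values with
surrogate tokens that can safely be passed around the system."""
import secrets


class TokenVault:
    def __init__(self):
        # In practice this mapping lives in a separately secured store.
        self._token_to_value = {}

    def tokenise(self, value: str) -> str:
        """Replace a sensitive value with a random surrogate token."""
        token = "tok_" + secrets.token_hex(8)
        self._token_to_value[token] = value
        return token

    def detokenise(self, token: str) -> str:
        """Recover the original value; only the vault can do this."""
        return self._token_to_value[token]
```

Downstream components handle only tokens, so compromising them yields nothing sensitive unless the vault is also compromised, which is the multiple-compromise property the principle describes.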
12. Throttle access to data in line with the role and requirements of the user.
Methods to consider include:
- Limiting the volume of records that a user can access at once, and within a given period (eg a case-worker’s shift)
- Verifying that function calls returning multiple records are capped at a reasonable volume, and ensuring that the database itself does not exceed this return limit.
- Restricting the records accessible to a user based on their role. For example, a user with responsibility for one geographic region could be limited to seeing records for only that region, unless another user approves wider access.
13. Make it easy to recover following a compromise.
Design your service so that it can be rebuilt quickly to a known clean state in the event that you detect a compromise. Make sure you swiftly address the flaw which led to compromise.
Plan in advance so that you can both recover the service and preserve the records and data you might need to support an investigation. If you wait until after a compromise, you may find you have to choose between recovering the system quickly and keeping the data you need for an investigation.
14. Design the service to support separation of duties.
Where the impact of attack, misuse or compromise would be significant, consider designing the most privileged or dangerous functions in the system so that two or more individuals must work together to perform them.
As an example, consider designing a system which prevents a single administrator from exporting a copy of all data.
If you replace references to individuals with tokens, you could enforce separation between administrators by giving them access either to the data that references a token, or the data that allows tokens to be translated into individuals. Not both.
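This split could be expressed as a simple permission model in which no single role holds both halves of the capability; the role and permission names are illustrative assumptions.

```python
"""Sketch of token-based separation of duties: one admin role can read
the tokenised records, another can translate tokens, and no role may
hold both permissions."""
ROLE_PERMISSIONS = {
    "data_admin": {"read_tokenised_records"},   # assumed example roles
    "vault_admin": {"translate_tokens"},
}


def check_permission(role: str, permission: str) -> bool:
    return permission in ROLE_PERMISSIONS.get(role, set())


# Invariant: no single role may hold both halves of the capability.
assert not any(
    {"read_tokenised_records", "translate_tokens"} <= perms
    for perms in ROLE_PERMISSIONS.values()
)
```

With this split, exporting identifiable data requires the cooperation (or compromise) of two administrators rather than one.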
15. Beware of creating a ‘management bypass’.
A common design flaw is to have weaker security controls in the management or operations infrastructure of a service than in the service itself.
In such scenarios, compromising a single external-facing component can result in privileged access to data stores through channels intended for administrative functions.