
My cloud isn't a castle

Created:  06 Jul 2018
Updated:  06 Jul 2018
Author:  Andrew A

You may have noticed from my previous blogs, or the talks I’ve given, that I’m a massive advocate for the security benefits that can come with moving to a good cloud service.

And yet, as these services grow in popularity it's increasingly common to hear news stories reporting that a company or organisation has suffered a cyber breach, accidentally leaking or losing sensitive data.

These are events that we, the security community, can plan for and react to. This is true whether we're running bespoke services in-house or hosting them in the cloud.

In this blog post I want to talk about why accidental breaches are continuing to happen in the context of cloud services and suggest some measures that should make future incidents less likely. 

Own goals

Recently, we've seen a steady flow of incidents where organisations using cloud services haven't applied the security settings needed to keep their information private. The result, in many cases, is that anyone with an internet connection can see their often-sensitive data.

We're talking here about things like S3 buckets being unintentionally left open, sensitive data being posted to public Trello boards and web service API keys being accidentally checked into GitHub. These are just some examples of the kind of issues that can affect many different types of cloud service, including SaaS offerings.
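The GitHub example is one of the easiest to guard against mechanically. As a minimal sketch (not an endorsed tool - real secret scanners ship far larger rule sets), a pre-commit check can flag strings that look like credentials before they ever reach a public repository. The patterns below are illustrative assumptions only:

```python
import re

# Illustrative patterns only - real scanners use much larger rule sets.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(r"(?i)\bapi[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"),
}

def find_secrets(text):
    """Return a list of (rule_name, matched_string) for anything that looks like a secret."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Wired into a pre-commit hook or a CI step, a check like this would reject the commit whenever find_secrets returns anything.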

Many of these breaches have come to light through the work of security researchers who have responsibly disclosed their findings to the affected organisations. To these noble individuals I would like to say a big thank you.

Our heads are in the cloud

So, why are these breaches happening? My theory is that some people are treating cloud services in the same way that they would an on-premise service: one that has centralised enterprise control and oversight. This leads to the assumption that, by default, they get the same control over cloud services as they would over the on-premise equivalent.

To use a crude analogy, our traditional self-hosted IT can be thought of as having been built inside a Medieval castle. It was up to us to decide how many portcullises we wanted and the minimum moat width. We could change the shape of the arrow slits when we had to defend against new weapons, and we could choose when to raise and lower the drawbridge to let only the people we wanted inside.

In our heads, we’ve assumed that cloud services resemble our castle. It just happens to be a castle that’s owned by somebody else and shared by several families. In many ways that’s true, especially when you stretch the analogy to suggest that archers on the battlements come as part of the service.

Unfortunately, that way of thinking allows us to forget that many cloud services aren’t designed to directly replicate old IT. Many cloud services are intentionally designed to promote collaboration and data sharing, while still allowing us to constrain access to named organisations or individuals.

In our old on-premise model, making some data available to 'everyone' meant 'everyone within the organisation, but no-one else'. In the cloud it can mean the same thing, or, by design, it can mean that 'everyone on the Internet can see it'.

Whilst the cloud provider is responsible for delivering a secure platform for our data, we as data owners are still responsible for how the service is configured. This means we need to acknowledge and act on our responsibility to configure the service to only share the data that we intend to, with only the people we want to.
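To make that shared responsibility concrete, here is a hedged sketch of the kind of check a data owner could run over an S3-style bucket policy document. The JSON structure follows AWS's published policy format, but the helper itself is a hypothetical illustration, not part of any AWS SDK:

```python
import json

def allows_public_read(policy_json):
    """Return True if any Allow statement grants access to everyone ('*')."""
    policy = json.loads(policy_json)
    for statement in policy.get("Statement", []):
        if statement.get("Effect") != "Allow":
            continue
        principal = statement.get("Principal")
        # AWS policies express 'everyone' as "*" or {"AWS": "*"}.
        if principal == "*" or (isinstance(principal, dict)
                                and principal.get("AWS") == "*"):
            return True
    return False
```

Run regularly across every bucket an organisation owns, a check like this turns "we assume it's private" into something that is actually verified.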

Finding the silver lining

We recommend focusing on reducing the burden on users to make good security decisions:

  • Make it obvious to contributors that they must not submit sensitive data to services - or parts of a service - that have public sharing enabled. This ensures that anyone contributing to publicly visible services will be on the same page about what is/isn’t appropriate to post.

  • Set sharing to be 'off' by default on each of the services that you use to hold internal-only data. And – unless required – entirely disable the ability to make data public. You should continue to use cloud services to share data to named individuals who are otherwise outside your organisation in an intentional, managed and audited way.

  • Identify an individual or a small team as being responsible for your organisation’s use of each cloud service. They can make sure the service is configured as expected and act as an authoritative contact if things go wrong (don't forget to make sure this responsibility is tolerant to people leaving your organisation!). You can then gain confidence in such configurations by including your organisation’s use of SaaS in your regular security audits and penetration tests.

  • Reduce the desire for your people to use shadow IT. You can do this by creating organisation-wide accounts for the services that your people need or want to use. An individual signing up for services for their own use may not realise that they need to put a bit of thought and effort into the configuration. It’s very hard to audit or monitor something that you don’t know is there.

  • Avoid sharing secrets such as credentials, API keys and password reset emails in shared services, unless they are appropriately protected by the service and can only be accessed by specific authenticated users. This is described in Cloud Security Principle 2 and our Secure Development and Deployment guidance.
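One lightweight way to act on the 'intentional, managed and audited' sharing point above is to compare a service's sharing records against an approved list. The sketch below is an assumption-laden illustration - the record format and the unapproved_shares helper are invented for this example, not taken from any real SaaS API:

```python
# Each record: (document, recipient_email). In practice these would come
# from the cloud service's audit-log export.
APPROVED_EXTERNAL = {"partner@example.org"}   # hypothetical allow-list
INTERNAL_DOMAIN = "example.com"               # hypothetical org domain

def unapproved_shares(records):
    """Return shares going outside the organisation to recipients not on the approved list."""
    flagged = []
    for document, recipient in records:
        domain = recipient.rsplit("@", 1)[-1]
        if domain != INTERNAL_DOMAIN and recipient not in APPROVED_EXTERNAL:
            flagged.append((document, recipient))
    return flagged
```

A report like this, reviewed by the responsible team suggested above, makes unexpected external sharing visible rather than silent.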

I don’t think that any of these suggestions will be a magical sticking plaster that makes these accidental data leaks just go away. However, I am hoping that we can make the number of incidents significantly smaller.

As always, any comments gratefully received below.


Andrew A
Cloud Security Research Lead - NCSC


Joel Samuel - 06 Jul 2018
A key nuance that is not discussed as much as it should be (I am not saying this article skipped it - this is an abstract reflection) is that both smaller providers (Trello, Atlassian, Slack) and larger ones (Office 365, G-Suite) tie the associated audit and configurable 'security' characteristics to paid tiers - SSO, audit exports, the ability for admins to assume a user, and so on.

It is difficult to prescribe paid tiers for popular SaaS products in order to achieve security goals, but it may be worth weighing up in a future blog. If you possess the capability and funds, you can export audit data from Trello/Slack/Atlassian/G-Suite etc. into your SIEM tool(s), but this is only valuable when done effectively, and it assumes the basics (as described here) are already in place and the organisation is relatively mature around the same.
Andrew A - 13 Jul 2018
It’s a conversation we’ve had round the office quite a lot. There’s always going to be a push towards free/cheap versions of the products as it can be a challenge to quantify the value for money of those security features. And of course shadow-IT rarely has a budget!

However, in order to help both users and IT teams do the “right thing” from a security perspective, I agree it’ll usually be necessary to use the paid tier of a service to get the features that appropriately deal with the threat.

The other often-unsaid angle on this is that it costs the vendor to run even the free tier of the service. While it’s not always true, they may instead be making money by mining your data and your use of the service. It means that someone needs to read the terms and conditions to make sure you're not handing over a licence to all your data.
Simon - 08 Jul 2018
One aspect I’ve noted, which says a similar thing in a different way, is that there is a lack of defence in depth in some cloud offerings. S3 buckets are a good example: they are one policy mistake (in a language for which Amazon introduced machine learning to help spot mistakes) away from leaking all their contents. They are also feature rich (WebDAV, anyone?).

One way for cloud vendors to add that depth back would be simplification of products and settings, so that the products are as simple (by default) as our mental models of them, especially security settings in AWS.

One cloud vendor I used had a really simple firewall model: you attached it to a server and set the ports for that server. Simple but effective. In comparison, AWS security groups can be horrendous.
Andrew A - 13 Jul 2018
I agree that we should be looking for services that are both secure by default (e.g. enforcing MFA for all users) and easy to configure to act the way we intend.

The “keep it simple, stupid” mentality continues to apply wherever humans are involved! However, I think there’s always going to be a balance between the SaaS services that can have simple configuration, and the cloud platforms that by design allow us to build a custom service using a wide variety of configurable components. In those latter cases I think we need to acknowledge the complexity and look for services that give us the tools to check our homework and help us abstract the impact of that underlying complexity.
Max Pritchard - 11 Jul 2018
The reduction in size of IT teams alongside the increase in technical capabilities has led to the rise in SaaS and cloud services. That, in turn, has led to the disintegration of device-based network perimeters. Defending castles has turned into defending villages, roads and towns engaged in constant commerce.

The principles of securing the new environments have not changed considerably, and castle analogies still have their place. Perhaps business just needs some help to build castles in the clouds? Use tools to visualise your cloud assets, group them, and automate the deployment of a secure starting configuration when the asset is created, regardless of who created it. Use multi-factor authentication to administrate assets.
Giles Montgomery - 11 Jul 2018
As organisations use more and more SaaS, PaaS and IaaS, the decision tends to be made by a business unit within the org. Security and IA professionals can help with these decisions by utilising an enterprise-class Cloud Access Security Broker (CASB) solution. Gartner are advocates of CASB and suggest that 80% of all enterprises will use them by the year 2020. Keen to have the NCSC's thoughts on CASB...?
Andrew A - 18 Jul 2018
We have customers that find products like CASB useful to log when labelled data leaves the organisation and block it where appropriate. However, we don’t see such products as a magic security bullet as they need that context and underlying intelligence to differentiate whether users are simply using a service or whether they’re interacting with their org’s corporate data in an inappropriate way in the service.

As you may have seen on one of our NCSC IT posts, we have chosen instead to use a web proxy and netflow to give us at least a high-level view of the traffic flowing in and out of our devices. Rather than try to block our users from putting data in cloud services where it shouldn’t be, we reckon that Azure RMS’s encryption will be sufficient to protect any documents that accidentally end up outside of our O365 tenancy. It also means that we haven’t needed to do the work that would be necessary to get a high level of confidence in a CASB deployment/service – such a service would be capable of seeing all of the content from inside our encrypted HTTPS channels, and so would need to be as trusted as our administrators’ devices, identity provider and document storage solution.

Our approach is just one way of solving the risks that we’ve identified to our own data. To others, I reckon that CASB products will be useful to some but their value will depend on what you’re trying to achieve with it and what other mitigating technologies you may already be using.
Andrew - 16 Jul 2018
This is a very useful blog. I have already been able to cite it in a debate with some of our digital team to explain why we definitely won't be using certain cloud services in our department - an argument I'd previously been losing. Thank you for providing this and highlighting the risks of these services. They're far too share-y by default for our use!

Rory M - 19 Jul 2018
The challenge of cloud, in a lot of ways, is that it reduces the resilience of the environment to individual security mistakes, by directly exposing systems to untrusted networks. To paraphrase a presentation I saw from Dr Levy some years ago, the FUT of these systems is 1 or even 0 (FUT being... Mess Up Tolerance).

Whilst no-one would argue that perimeter firewalls in traditional enterprise networks were a panacea, they did serve to hide a lot of problems from casual external view.

When you combine exposed systems with the increasing indexing of public data and easy searching (e.g. Shodan, Censys, Rapid7 Open Data), it's trivial for attackers to find these misconfigurations and exploit them.

Carol - 18 Sep 2018
I have been briefing our management regularly over the past two years on the fact that embracing cloud technologies is changing our entire cyber security architecture and needs investment in new tools so we can govern it as reliably as we did our traditional IT architecture. But it isn't just the technology that needs to change, though it is often the focus. Traditional Acceptable Use and Information Assurance policies need to focus on behaviours and on making every individual understand they have personal responsibility for how data is used, and help them take ownership of that. But that change is a cultural one, which is notoriously hard to achieve and relies on groups such as HR and Legal who tend to have very traditional views of staff policies.
Andrew A - 20 Sep 2018
I agree that there’s a significant hearts-and-minds angle here. We’re also seeing that nobody wants to be the first to make any significant change in approach, philosophy or solution.
As you’ll have seen from some of my colleagues’ blogs, we believe that we often need to fundamentally change our approaches and policies so that the technology makes it easier for people to do the right thing. And that’s a huge change from a traditional (and ineffective) culture blaming the user if they don’t use or deploy technology in a secure manner – whatever that means.
