NCSC IT: how the NCSC chose its cloud services

Created:  08 Jan 2018
Updated:  12 Jan 2018
Author:  Andrew A
Part of:  Cloud security
Cloud services

So far in the series discussing how the NCSC built its IT we’ve talked about our design principles, the resulting architectural principles and some of the practicalities of building our network in the cloud.

I’ve blogged recently to talk about some approaches we’ve taken to getting confidence in cloud services, and why transparency is important. Following on from that I want to give a few examples of how we got enough confidence to use our chosen cloud providers. You’ll notice that a lot of this relied on the cloud providers being open with us about how they run and maintain those services. We haven’t found a good answer to how to engage with less-transparent services, beyond avoiding them.


User needs come first

One of the things you’ll often hear our Chief Architect say is “a highly secure solution that no-one uses isn’t secure at all”. There’s an extra nuance to this too; people will only choose to use a system that helps them get their job done. Therefore we needed to focus on providing the right tools and ways of working.

We’ve learned a lot from how the Government Digital Service goes about collecting user needs – as they describe in their Service Manual – and so chose to take a similar approach. A selection of the things we heard from our users included:

  • I want to collaborate on documents with my colleagues so that we can work more efficiently
  • I want to be confident that my files are backed up somewhere else in case I break my device
  • I want the Internet to work as well as it does on my computer at home
  • I want the same services to be available both on my laptop and my phone
  • I want to be able to collaborate with an external organisation using the web platform they’re already using

We also have a duty of care to protect our users from some of the nastier bits of the web, and to make sure that we apply certain controls (such as retention of documents for our corporate record and to comply with the Freedom of Information Act).


Protecting the keys to the kingdom

We quickly realised that we’d need quite a lot of confidence in at least some of the cloud services we’re using. We reckoned that the most sensitive datasets would be in our emails and our document management system. To protect this data, we’d also need similar confidence in the identity providers that make the decisions about who can access what data, and in anything used to manage our devices. This meant that for the NCSC IT, we needed to focus on Office 365 and our two IaaS providers: Amazon Web Services and Microsoft Azure.

We turned to the NCSC Cloud Security guidance to assess each of the services, looking for good answers to each of the 14 security principles. We wanted to carry out the full assessment as suggested in the guidance, as we were looking for confidence in some of the less measurable aspects of the service. This included:

  • the amount of effort the providers put into maintaining and securing the services
  • the odds of them detecting an intrusion
  • understanding how our data would be separated from that of other users of the service

We acquired this confidence in a few different ways:

  • The providers had produced formal responses to our 14 security principles. These meant we didn’t need to search lots of documents to get the answers to our main questions.
  • Many of the responses to our security principles were backed up by independently auditable international standards, which meant we didn’t need to blindly trust the claims being made.
  • The providers had published good-practice guides to help security-conscious enterprises get their initial configuration right.
  • We’ve been fortunate enough to work closely with Microsoft and Amazon Web Services over the last few years. Their openness in those conversations, including about imperfections and future aspirations, has given us confidence that each of them knows how to run a service that is good enough to secure the data in question. It’s also meant that when they make security claims, we have reason to believe them.

Traditionally, someone would have spent a lot of time analysing the answers to the 14 principles to come up with a multi-coloured bespoke spreadsheet describing exactly why we chose to trust the service at that moment. We didn’t do that. Instead, we looked at the vendor responses to the principles, used some of the other sources to figure out whether the answers were believable, and then decided whether the responses met our needs.

They did.

Then we simply got on with using the services, following the vendors’ blueprints and good-practice guides during setup and configuration.


And that amount of security was just right

Our users told us quite a lot about how they wanted to do their work and the types of tools that would help. One of the common themes was a desire to use much of the same software and cloud services as the rest of the tech sector, rather than sticking to the more legacy or bespoke ways of working that we’d had to use in the past.

One of the challenges of wanting to use a variety of SaaS services is that it’s time-consuming and often difficult to do a full security assessment against the service in the way we’ve done for AWS, Azure and Office 365. We quickly realised though that we didn’t need to get that same level of confidence in some of the other services we’re using, because the reputational risk of something going wrong in those services was much lower.

In a previous blog post I discussed some of the ways you can get confidence in the security promises of a cloud service, and not coincidentally, some of the products we’re now using were among those mentioned. Some of our preferred services didn’t publish as much information about their service online as we’d have liked. We used those services to do some discovery work on a 'due diligence' approach for SaaS services, designed to ensure that the services at least got the basics right.

Let me talk you through some examples:

  • Our technical writers wanted to use Atlassian Confluence to collaboratively draft the guidance and blogs before they’re published on our website. As the majority of that data will end up being published on the web anyway, we were happy to rely on the claims made in Atlassian’s published security white papers.
  • As we started to develop services to support our Active Cyber Defence strategy, we wanted to use the industry-standard Git to manage our source code. As we were already using GitHub to collaborate with other government projects, it seemed like a good choice for our public and private repositories. We applied the previously mentioned SaaS guidance to the service to check that it did some sensible security things. To mitigate the risk of us not knowing a lot about the service, we make sure we scan our code for credentials, SSH keys and so on before it’s checked in, which in any case is good practice for code that may in the future be made open source.
  • Stuart G mentioned in his recent blog that we use Windows Defender's cloud-backed protections, as recommended in the NCSC's EUD Security Guidance for Windows 10. We have configured the product suite so that it avoids sending sensitive data (such as documents or emails) to the Defender cloud service. As long as we're willing to trust that Defender acts as described, we don't need to do any further work to build trust in the associated cloud service, as our sensitive data won't be uploaded. You can read more about applying this philosophy to other products in our recent guidance about managing the risk of cloud-enabled products.
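The pre-commit credential scan mentioned in the Git example above can be sketched in a few lines. The patterns below are illustrative assumptions rather than our actual rule set; in practice you'd more likely start from a maintained secret-scanning tool (such as git-secrets or truffleHog) than a hand-rolled list.

```python
import re

# Illustrative patterns only; a real deployment would use a maintained
# secret-scanning tool with a much richer and regularly updated rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
    "hardcoded_password": re.compile(r"(?i)\bpassword\s*=\s*['\"][^'\"]+['\"]"),
}

def scan_text(text):
    """Return (pattern_name, line_number) pairs for any suspected secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings
```

Wired into a repository's pre-commit hook, a script like this would run `scan_text` over the staged files (the output of `git diff --cached`) and exit non-zero on any finding, blocking the commit until the suspected secret is removed.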


Here’s the silver lining

I’ve talked about a few of the SaaS options we use, and you can see that we've put more effort into getting confidence in the security of some services than others. We’re hoping that you can similarly identify which of your cloud services you need to focus your time and effort on, and just as importantly which ones you don't.

We’d love to hear what’s given you the confidence to use certain SaaS services – please drop us a line using the comments below.


Tom L - 08 Jan 2018
An interesting blog! Two questions spring to mind:

- You imply that you let different user groups pick tools they like, rather than run a procurement exercise. Does this not contradict public sector procurement rules about fair selection of products and services against requirements, versus 'we wanted to use X'?

- When you say "used some of the other sources to figure out if the answers were believable" - could you elaborate on what these 'other sources' are, and how other departments can take advantage of them?

Many thanks for this informative article!

Andrew A - 01 Feb 2018

Thanks for taking the time to comment. Sorry it has taken me a while to reply.

I suspect I’ve over-simplified things in the blog, and confirm that we do not contradict public sector procurement rules. Like other organisations, our approach will be driven by setting requirements and then going to the wider market to source solutions to those requirements. In doing that we will usually look at what is already available in the market and consider user preferences, but we would not let subjective user preferences towards specific products and services dictate the chosen solution. In addition, public sector procurement rules would only kick in at certain financial trigger points, and in many cases we’re below those trigger points (sometimes because we’re using freely available services or installing the free and open-source versions of products).

With regard to other sources, I’m referring back to those I discussed in the previous blog post.

Mark T - 09 Jan 2018
What approaches would you suggest for small businesses to gain similar levels of assurance?

Some 99.3% of all UK businesses are small. They simply do not have the clout to get cloud service companies to provide "...formal responses to our 14 security principles".

My feeling is that cloud service providers need to do a lot more to publicly demonstrate their levels of security, which have been independently audited and verified.
Andrew A - 22 Jan 2018
Hi Mark,

We would love to see more cloud providers being more open with their customers about how they run their services, including independent audit of those claims where appropriate. In a previous blog post [link to:] I talked about getting confidence using the materials already published by some providers. The majority of these approaches are lightweight and can still give enough confidence for many data sets and workloads.

The other place worth taking a look is the G-Cloud Digital Marketplace [link to:]. The framework includes a number of security statements for each listed service. The questions that vendors are required to answer for each listed product were derived from the OFFICIAL threat model as used across the public sector. My understanding is that the services that give good answers should be appropriate for the majority of services run at OFFICIAL.

Finally, I think there’s a certain amount of needing to vote with our feet. Where there’s a formal requirement to get high amounts of confidence, we can choose to consider the amount of information, transparency and external validation available for each supplier when making a purchasing decision.
Julian Knight - 18 May 2018
My view would be that any vendor not willing to demonstrate alignment with just 14 sensible and standard principles would be one that I would totally avoid! It isn't as though the principles are something wild and off-the-wall, they are a good summary of common sense.
Linus T - 10 Jan 2018
> As the people responsible for the Git, seemed like an obvious choice for our public and private repositories.

GitHub are just users of the Git tool / protocol - they're not responsible for it. See also and
Andrew A - 12 Jan 2018
Hi Linus. Thank you for your post. You’re very right and we have amended the blog to reflect your comment.
Scott Nicholson - 19 Jan 2018

First, great article and loved the presentations on this at CyberUK 17. Can I ask what your position is in terms of enabling online access for O365, and the use of multi-factor authentication on SaaS services?

From an infosec perspective I’m clear on the fact it should be included for any SaaS service processing company data; I just couldn’t find what your position was in the guidance docs?

Would also be good to understand whether you’ve made use of Microsoft’s Cloud App Security ?
Andrew A - 22 Jan 2018
Hi Scott, thanks for the kind comment – we love to hear such feedback!

We have chosen to only allow O365 to be accessed from devices that we directly manage, which is enforced using Conditional Access on ADFS. It means that it’s not possible to log in from other devices on the Internet. The use of CA effectively makes our devices a second factor as it prevents attacks from being brute-forced from the Internet. We also force use of Microsoft MFA for any users that have any administrative privilege.

We’ve decided to not use Cloud App Security at the moment. We already use a web proxy for traffic flowing in and out of our devices, and reckon that Azure RMS’s encryption will be sufficient to protect any documents that accidentally end up outside of the O365 service. Whether it’s a useful service for you will depend on what you’re trying to achieve with it and what other mitigating technologies you may already be using.
John Glover - 11 Apr 2018
Hi Andrew,

I'm not sure if you are aware but the UK MOD went through this thought process when they utilised the G-Cloud to source a secure collaborative work environment for external engagement. See:

DE&S recently discussed Defence Share (powered by Kahootz) in their recent magazine (page 11):

As you can imagine, this was a huge cultural and operational shift for them but it has really opened up their ability to work with suppliers, partners and SMEs outside of their private networks.

We, for our part, publish how we support your Cloud Principles and have encouraged other vendors to do the same:

Rob - 17 Apr 2018

How is access to Office 365 provided - do devices access it directly (albeit they will be domain joined and meet other criteria for conditional access to work) or do you route everything via a foundation grade VPN which is what you recommend secure endpoints do before connecting outbound to O365?

If the former, does the reliance on TLS over which you have no control (in terms of the certificates used, revocation interval etc.) not contradict the 12 security principles for endpoints - i.e. the use of foundation grade VPN for protection of data in transit?

If the latter, do you split tunnel which again is against recommendations and indeed the Security Principles for VPN clients/gateways, although other documentation discusses risk managed approaches?
Andrew A - 06 Jun 2018
Hi Rob. We are currently routing most of the traffic through our corporate VPN, which we then use as a condition at authentication in Azure AD Conditional Access. The one exception is that we use what we describe as a managed tunnel to allow traffic from Skype for Business outside of the VPN to improve audio/video performance.

We got to that place by looking at the documentation in Microsoft’s trust centre, which gave us confidence that O365’s use of TLS aligns with the standards described in our guidance. Given the large number of URIs used by O365, we realised it would be a burden to try to maintain the list of trusted services that make up O365. We chose instead to specifically allow Skype for Business (and Teams) audio/video data outside of the VPN, as it improved user experience at minimal security risk.
Dennis Lloyd - 22 Dec 2018
While I can see that your decision to allow Skype for Business audio/video data outside your VPN ought to have “improved user experience”, I wasn’t quite so sure about the adjoining statement that doing so was at “minimal security risk”. Why did you feel that your Skype for Business audio/video data presented a minimal security risk compared to your other Office 365 data?
Allister Frost - 19 May 2018
Hi Andrew,

Thanks for the insight into your Cloud selection.
I wonder, given that the Spectre and Meltdown CPU flaws were announced less than a week before you published, whether you had time to factor those risks in, or whether the vendors gave you earlier warning and the risks had been fully investigated in time?
Andrew A - 04 Jun 2018
When we made our technology choices, we carefully considered how seriously our chosen providers took their patching responsibilities. All our chosen providers made clear public statements as to what we and others should expect. I don’t think we would have done anything differently had we known about Spectre and Meltdown prior to making the decisions. I say that because the cloud service providers that commit to patching their systems, software and firmware are the ones that could then defend against these CPU issues, along with any other security vulnerabilities that may be identified in their service.
Roger - 19 Oct 2018
Setting up as a micro business 3 years ago, I set upon the route of Office 365 for mail and apps. For storage, I went with Tresorit, which in effect is based upon Azure but configured with just enough access for that very specific service to work.
This takes all the headache out of keeping up to date with any changes and upgrades.
I've found both scalable since starting, and when I lost a laptop due to HDD failure I was confident that everything was safe and it was simply a case of reinstating the hardware and synchronising Office 365 and Tresorit.
The effect on my business was the cost of an HDD and a day to replace and sync. The limiting factor was internet bandwidth.
I know that the purists here will argue many points, but for a small business, this setup has given me some big boy security and resilience, with only a little understanding, for under £30 a month.
