
There's a hole in my bucket...

Created:  21 Jan 2019
Updated:  21 Jan 2019
Author:  Nigel C
Part of:  Cloud security
[Image: leaking bucket]

In his 'My cloud isn't a castle' blog, Andrew A discussed the general challenge of preventing leaks from misconfigured cloud services. Here we look at a specific service that we've been asked about: Amazon Simple Storage Service (S3), part of Amazon Web Services (AWS).

It seems like every month there's a new announcement about an organisation suffering a data leak from an improperly secured Amazon S3 bucket. Either the bucket was made public (exposing sensitive files to anyone on the Internet), or it was left open to any authenticated AWS user (and since anyone can sign up to AWS, that's barely any better).

However, there's not much public discussion about why so many buckets end up exposed. A first thought might be 'someone forgot to lock down the access'. Yet S3 buckets can, by default, only be accessed by the bucket owner: the owner must actively decide to make a bucket accessible to others, so these leaks are not caused by users simply forgetting to lock buckets down. Something else must be going on...

 

It's not always good to share

While some data leaks may simply be a case of storing sensitive data in the wrong bucket, I suspect that many of them are a consequence of S3 buckets being used as 'just another file storage system'. Despite some superficial similarities, S3 is not a conventional file system like those on Windows or UNIX, and assumptions carried over from such systems can mislead. Working with S3 without understanding how AWS works will lead to frustration, insecure data, or both.

Access control for S3 buckets can be complex because it is implemented through two different systems: the older, coarse-grained ACLs (Access Control Lists) and the newer IAM policies. IAM policies are powerful and fine-grained (perhaps too fine-grained for purposes like simple file-sharing). For users unfamiliar with IAM (and its use of JSON) there's a steep learning curve, so it's perhaps not surprising when people on a tight timescale simply make the bucket public. Do any of these justifications sound familiar?

  • 'It's only for a little while.'
  • 'It's only for the demo. I'll look up the proper way to do it later.'
  • 'Nobody will find it anyway.'

Unfortunately for these people:

  • It's easy to forget to remove temporary permissions.
  • It's easy to forget to implement correct protections when the demo transitions to a live system.
  • People will find it: there are lots of people hunting for open buckets. It's sensible to assume that if a bucket is public, its contents have already been copied.

And while 'world-readable' for local files really translates to 'readable by anyone on the system', with a public S3 bucket it's literal: readable by anyone in the world.

 

Stopping public exposure

AWS has introduced a new feature: Amazon S3 Block Public Access. This can be enabled across a whole AWS account, or just on individual buckets. It's an easy way to prevent users making buckets public (or, depending on options, making any files public).

You should enable Block Public Access. If you require publicly-readable buckets or files (perhaps for a simple website), the cleanest solution is to move these into a separate account used only for publicly-readable data. This also reduces the likelihood of somebody accidentally putting a sensitive file in a public bucket.
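
As an illustration, here's a minimal sketch of turning the feature on, assuming the boto3 SDK and hypothetical names (the bucket 'example-internal-data' and account ID '111122223333'); your own tooling and names will differ:

    import boto3

    # Bucket-level: block public ACLs and public bucket policies on one bucket.
    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket="example-internal-data",           # hypothetical bucket name
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,        # reject new public ACLs
            "IgnorePublicAcls": True,       # ignore any existing public ACLs
            "BlockPublicPolicy": True,      # reject new public bucket policies
            "RestrictPublicBuckets": True,  # restrict access via existing public policies
        },
    )

    # Account-level: apply the same settings to every bucket in the account.
    s3control = boto3.client("s3control")
    s3control.put_public_access_block(
        AccountId="111122223333",                 # hypothetical account ID
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )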

 

How to use policies to secure a bucket

I strongly recommend reading the AWS IAM guidance and the AWS guidance on managing S3 permissions. I also recommend you watch this recent talk at AWS re:Invent; it's only an hour long, and it clearly describes the overall AWS approach to policies. To summarise:

  • Policies can be identity-based (such as those attached to users, groups, or roles) or resource-based (such as those attached to S3 buckets).
  • Policies don't nest or inherit, which keeps things simple: whether a file in a bucket can be accessed is determined by combining all applicable identity-based and resource-based policies.
  • If any policy explicitly denies access, access is denied.
  • Otherwise, if any policy explicitly allows access, access is allowed.
  • Otherwise, access is denied.

However, there is a lot more to policies than this simple summary, and like all permission systems they have their own quirks that have to be learned.
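
To make the summary concrete, here's a minimal sketch of a resource-based (bucket) policy, assuming boto3 and hypothetical names (the bucket 'project-x-data', a 'project-x-reader' role, account '111122223333'). The first statement allows one role to read objects; the second explicitly denies any access that isn't over HTTPS, and because an explicit deny always wins, it overrides any allow:

    import json
    import boto3

    bucket_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Allow one role to read objects in this bucket.
                "Sid": "AllowProjectXRead",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111122223333:role/project-x-reader"},
                "Action": ["s3:GetObject"],
                "Resource": "arn:aws:s3:::project-x-data/*",
            },
            {
                # Explicitly deny any request that isn't made over HTTPS.
                # An explicit Deny always wins over any Allow.
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::project-x-data",
                    "arn:aws:s3:::project-x-data/*",
                ],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            },
        ],
    }

    # Attach the policy to the bucket.
    boto3.client("s3").put_bucket_policy(
        Bucket="project-x-data",
        Policy=json.dumps(bucket_policy),
    )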

 

Make it easier to be secure

Whether you're a manager, an account holder, or a user, there are some design and configuration choices that can help:

  • Anyone expected to implement access control on S3 buckets needs at least a basic understanding of JSON and how IAM policies work - there are no shortcuts. If they are not already familiar with these, include time for them to learn, whether through self-study or a training course.
  • Enable Amazon S3 Block Public Access across the account to prevent users taking unsafe shortcuts. Create a separate account to handle public buckets or files if required. AWS Organizations can help with managing multiple accounts.
  • Create appropriate policies in advance. Someone is less likely to make a mistake if they have ready-made policies (and instructions on how to use them) relating to (for example) 'everyone in the company', 'everyone in Project X', 'everyone in Accounts', etc.
  • Name these policies in a clear and obvious way. This also makes it easy to use custom versions of the usual AWS managed policies. For example, instead of using the managed policy that allows access to any bucket, the 'everyone in Project X' policy might allow access only to the buckets of Project X, limiting the damage if a user is compromised (a sketch of such a policy follows this list).
  • Consider how policies will interact. Careful thought here can simplify the management of policies and make them easy to understand, such as using a bucket policy to control read-only access to a bucket, and giving write-access to a publishing role. Doing this carelessly, on the other hand, can create a maze of policies that make it hard to determine just who has access to a particular bucket. Since policies don't nest, combining a number of simple policies is likely to be easier to understand than creating complex ones.
  • Roles can be a good way to give some users extra permissions, but be careful not to accidentally create a role that anyone can assume (see the trust-policy sketch after this list).
  • Use one of the many security tools for detecting public buckets and files. Amazon provide their Trusted Advisor service, which can report on public S3 buckets, but there are many others.
  • Enable object-level logging to identify who is accessing files. If there's a breach, it's helpful to know whether anyone has read the exposed files (see the logging sketch after this list).
  • Never create a bucket that's publicly writable. This could allow anyone on the Internet to use that bucket for storage - including criminals.
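
For example, a ready-made 'everyone in Project X' policy might look something like the following sketch. It assumes boto3 and hypothetical names (buckets prefixed 'project-x-'), and grants list, read and write access only to those buckets rather than to every bucket in the account:

    import json
    import boto3

    project_x_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                # List only the Project X buckets.
                "Sid": "ListProjectXBuckets",
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": "arn:aws:s3:::project-x-*",
            },
            {
                # Read and write objects only in the Project X buckets.
                "Sid": "ReadWriteProjectXObjects",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": "arn:aws:s3:::project-x-*/*",
            },
        ],
    }

    # Create a clearly-named customer managed policy that can be attached
    # to the Project X users, groups, or roles.
    boto3.client("iam").create_policy(
        PolicyName="everyone-in-project-x-s3",
        PolicyDocument=json.dumps(project_x_policy),
        Description="S3 access limited to the project-x-* buckets",
    )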
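When creating a role, the part to get right is its trust policy, which controls who can assume it. A minimal sketch, again assuming boto3 and hypothetical names: this trust policy only lets principals in the stated account (who have also been granted sts:AssumeRole on the role) assume it, rather than using a wildcard principal:

    import json
    import boto3

    # Trust policy: only principals in this (hypothetical) account, who have also
    # been granted sts:AssumeRole on the role, can assume it. Avoid wildcard
    # principals here, which can make a role far more widely assumable than intended.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
                "Action": "sts:AssumeRole",
            }
        ],
    }

    boto3.client("iam").create_role(
        RoleName="project-x-publisher",              # hypothetical role name
        AssumeRolePolicyDocument=json.dumps(trust_policy),
        Description="Role with write access to the Project X buckets",
    )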
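Object-level logging for S3 is provided through CloudTrail data events. As a rough sketch (assuming boto3, an existing trail named 'my-trail', and the hypothetical 'project-x-data' bucket), the following records every object-level read and write on that bucket:

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Add S3 object-level (data event) logging for one bucket to an existing trail.
    # Note: put_event_selectors replaces the trail's existing event selectors.
    cloudtrail.put_event_selectors(
        TrailName="my-trail",                    # hypothetical existing trail
        EventSelectors=[
            {
                "ReadWriteType": "All",          # log both reads and writes
                "IncludeManagementEvents": True,
                "DataResources": [
                    {
                        "Type": "AWS::S3::Object",
                        # Trailing slash = all objects in the bucket.
                        "Values": ["arn:aws:s3:::project-x-data/"],
                    }
                ],
            }
        ],
    )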

Finally, in addition to the security tools mentioned above, Amazon have a product named Zelkova, which is currently in beta. It is designed to analyse how various policies interact and reveal potential holes in the permissions applied. Judging by the talk at AWS re:Invent, it will be a useful tool for unravelling permissions problems.

 

Nigel C

Senior Security Researcher
