14 AWS S3 Security Best Practices

S3, or Simple Storage Service: the most popular service in AWS, and undoubtedly the lowest-hanging fruit from a security misconfiguration perspective.

Over the years, organisations on the public cloud have seen a series of breaches arising from misconfigured S3 buckets or missing S3 security controls. Organisations such as WWE, Dow Jones & Co, and Verizon Wireless have been impacted by S3 misconfigurations, as covered in this post by Bitdefender.

In this post, we’ll look at S3 security controls in detail and how to configure them.

What is S3?

S3 is a serverless “store and forget” service by AWS that is used for storing data on the cloud. It is highly scalable, enables users to store and retrieve data at any time, and is extremely flexible in its use cases. Organisations use S3 for varied purposes such as hosting static files, storing data or application backups, or serving as their data lake.

As is evident, S3 would be the storage service of choice for organisations running their workloads on AWS.

Let’s first look at how S3 is accessed.

As with any other AWS service, there is more than one way to access S3 buckets.

If accessing through a browser, the URL of an S3 bucket might look like:

http://<bucket>.s3.amazonaws.com/
http://s3.amazonaws.com/<bucket>/

If you’re wondering why we used HTTP instead of HTTPS in the above formats, the Encryption in Transit section below should clear up at least one misconfiguration you should probably avoid.

Busting some S3 Myths

Imagine the following scenario:

You take a backup of your customer database or code to S3. Someone creates an IAM role or access key that grants read-only access to the bucket. A developer then makes that access key public; or worse, an administrator accidentally makes the S3 bucket itself public.

Whoa! 

While this might sound awfully naive, it’s one of the most common root causes of breaches in the public cloud space.

There’s a common misconception around S3 buckets:

S3 buckets CANNOT be compromised if your bucket is private

They most definitely can!

Another one:

S3 buckets are secure by design

No, they are not!

Let’s see how such instances can be avoided, or, in case they do occur, what can be done to contain the threat and minimise its impact.

We’ll break this blog post into three categories – preventive controls, detection controls and general best practices.

Preventive Controls for S3 Security

So let’s start with some basic preventive controls that are a MUST to boost S3 security.

Note: The following configurations should be enabled at the bucket level so that they apply to all objects in the bucket.

Public Access

Needless to say, enabling public access opens up an S3 bucket to the internet. This should ONLY be done in cases such as static website objects, which in turn should be served via CloudFront and not S3 directly. DO NOT open up S3 buckets to the internet unless absolutely necessary.

Note: A public bucket doesn’t always mean that you can view its objects from the browser.

For example – we have a bucket named db-tester, whose objects we were able to list via the aws cli.
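
A minimal sketch of that call, assuming credentials from an unrelated AWS account are configured locally:

# listing succeeds because the bucket grants access to any authenticated AWS user
aws s3 ls s3://db-tester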

But when we tried to access it from the browser, it gave a 403 error.

But this bucket is still considered to be public!

This is because the bucket is accessible to anyone with an AWS account, i.e. to any authenticated AWS user, not just identities in your own account.

You can configure an Access Control List (ACL) to make sure your bucket doesn’t become public, but we recommend turning on all four Block Public Access settings for the bucket instead.

This also blocks anyone from making changes to ACLs that could make the bucket public.
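
A sketch of that configuration via the aws cli (the bucket name is a placeholder):

# enable all four Block Public Access settings on the bucket
aws s3api put-public-access-block \
    --bucket <bucket> \
    --public-access-block-configuration \
        BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true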

Cross-account Access 

At times there will be a need to grant another AWS account access to one or more of your S3 buckets, whether within your organisation or to a third party.

This configuration needs to be done carefully and one should avoid granting access that’s too permissive.

Like any other AWS permissions, cross-account access must be very specific so that the other account doesn’t have access to the whole bucket unless absolutely necessary.

For instance, if a corporate network requires access, one could use the S3 bucket policy below to grant only the “GetObject” action, restricted to specific IP addresses (aws:SourceIp matches the caller’s public IP address, so the documentation-range addresses below are placeholders):

{
    "Version": "2012-10-17",
    "Id": "S3PolicyId1",
    "Statement": [
        {
            "Sid": "IPAllow",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::<bucket>/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "192.168.1.1",
                        "192.168.1.2",
                        "192.168.1.3"
                    ]
                }
            }
        }
    ]
}
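
For genuinely cross-account access, a bucket policy along the following lines grants a specific account read access. This is a minimal sketch: the account ID 111122223333 is a placeholder, and the action list should be narrowed to what the other account actually needs.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CrossAccountRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket>",
                "arn:aws:s3:::<bucket>/*"
            ]
        }
    ]
}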

Encryption in Transit

Getting back to the URL formats we mentioned earlier.

If we use plain HTTP from our browser to access an S3 bucket which is public, we will be able to view the contents of the bucket unless Secure Transport (SSL/TLS) is enforced on the bucket.

The unencrypted traffic is vulnerable to man-in-the-middle attacks that can steal or modify data in transit.

The following bucket policy can be used to enforce SSL/TLS on every request:

{
  "Id": "ExamplePolicy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSSLRequestsOnly",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::<bucket>",
        "arn:aws:s3:::<bucket>/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      },
      "Principal": "*"
    }
  ]
}
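
To apply it, save the policy to a file and attach it via the aws cli; the file name is a placeholder:

aws s3api put-bucket-policy \
    --bucket <bucket> \
    --policy file://enforce-tls.json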

Granting access to non-AWS services via Access Keys

There will be instances (especially in hybrid infrastructure deployments) where programmatic access to S3 buckets cannot be granted via IAM roles and access keys are the only option. Before going down this route, make sure your S3 bucket policy restricts access to known public IP addresses; otherwise, any leakage of that access key (via code etc.) will present your S3 bucket to adversaries on a platter!

A bucket policy similar to the one in the “Cross-account Access” section above can be used here.

Versioning

It’s not uncommon for S3 bucket content to be deleted accidentally; it happens all the time. Ensure your critical S3 buckets have versioning enabled, which allows you to revert to a previous version in a few clicks.

This configuration acts as a backup for the objects in your bucket and is simple to enable.
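
One way to turn it on with the aws cli (the bucket name is a placeholder):

aws s3api put-bucket-versioning \
    --bucket <bucket> \
    --versioning-configuration Status=Enabled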

Bucket Encryption

In case a bucket contains sensitive information, consider using bucket encryption to ensure the contents of the bucket remain encrypted at rest even if someone gains unauthorised access to the S3 bucket.

This can be done using S3-managed keys (SSE-S3) or KMS keys (customer managed keys or AWS managed keys).
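
A sketch of enabling default encryption with a KMS key via the aws cli; the bucket name is a placeholder, and you would use the AES256 algorithm instead of aws:kms for S3-managed keys:

aws s3api put-bucket-encryption \
    --bucket <bucket> \
    --server-side-encryption-configuration '{
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}
        }]
    }'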

MFA Delete

For certain critical buckets, such as ones that contain CloudTrail logs or VPC flow logs, consider enabling the MFA delete functionality, which adds a two-step verification requirement whenever a delete operation is invoked on these buckets; this is often a compliance requirement too.

The MFA delete setting comes under the versioning configuration of your bucket, which we saw in the previous section. To enable MFA delete, you must enable versioning first, not the other way around.
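
A sketch of the corresponding aws cli call. Note that MFA delete can only be enabled using the bucket owner’s root credentials, and the account ID, MFA device ARN and token code below are placeholders:

aws s3api put-bucket-versioning \
    --bucket <bucket> \
    --versioning-configuration Status=Enabled,MFADelete=Enabled \
    --mfa "arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456"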

Cloud Security Posture Management (CSPM) tools run automated checks to verify the above settings in customer environments. Cy5’s CSPM offering runs 300+ checks, including ones that are specific to S3.

Also, do check out our blog on open source cloud security tools that can help detect misconfigurations such as the above.

Detection Controls for S3 Security

S3 Bucket Policy Changes

Early detection is key to effective response! It is important to detect when critical changes are made to S3 buckets so they can be corrected in time. Bucket policy changes might alter SSL enforcement, grant cross-account access, and so on, as discussed earlier, which is what makes this piece so important.

Consider monitoring CloudTrail logs for changes to S3 bucket policies, or integrate CloudTrail with your SIEM platform to generate alerts when such changes are made.
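
One way to wire this up natively is an EventBridge rule over CloudTrail management events. A sketch of the event pattern follows; the rule’s target (an SNS topic, for example) is not shown, and the event list can be extended with other bucket-level changes you care about:

{
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": [
            "PutBucketPolicy",
            "DeleteBucketPolicy",
            "PutBucketAcl"
        ]
    }
}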

S3 Access Logging

S3 server access logging helps investigate potential security incidents and should be enabled for public or sensitive S3 buckets.

This configuration records each and every request made to your bucket. It can be helpful in threat hunting scenarios where you want to look for IOCs after a potential data exfiltration, or when tracing an adversary’s activities in your infrastructure.

It is comparable to the access logs of an Apache or Nginx web server.
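
A sketch of enabling it via the aws cli, assuming a separate log bucket that already grants the S3 log delivery service permission to write to it; bucket names and prefix are placeholders:

aws s3api put-bucket-logging \
    --bucket <bucket> \
    --bucket-logging-status '{
        "LoggingEnabled": {
            "TargetBucket": "<log-bucket>",
            "TargetPrefix": "s3-access-logs/"
        }
    }'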

GuardDuty Inspection

Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.

Consider enabling S3 inspection in your GuardDuty service to alert on unusual access patterns that provide an early warning signal of compromise. S3 inspection is not enabled by default.

It can monitor all S3 events (access and configuration changes) and detect unusual or malicious activity.
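
One way to switch it on for an existing detector via the aws cli; the detector ID is a placeholder:

aws guardduty update-detector \
    --detector-id <detector-id> \
    --data-sources '{"S3Logs": {"Enable": true}}'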

A thorough article from AWS on GuardDuty’s S3 protection might help here.

Access Key Mis-use

Detection controls for compromised access keys are a must, as these keys are one of the most common methods of authenticating to AWS.

Make sure access key best practices are followed, because if existing keys get compromised, or an attacker creates new ones, that’s all just bad news.

Carefully monitor access key usage and set up alerts for when keys are used from unusual locations or at unusual hours, which again is an early warning signal.
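
As a minimal sketch, the EventBridge pattern below flags new access key creation recorded by CloudTrail. IAM is a global service, so the rule must live in us-east-1, and the event list can be extended (DeleteAccessKey, UpdateAccessKey, etc.):

{
    "source": ["aws.iam"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["iam.amazonaws.com"],
        "eventName": ["CreateAccessKey"]
    }
}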

Best Practices

Bucket Classification

Might sound a little old school, but data classification goes a long way: it not only ensures that your sensitive assets are monitored closely but also optimises your incident response process.

For example, you would be a LOT more worried if a bucket with credit card data is made public as opposed to a bucket with static HTML files! 

Consider data discovery tools such as Amazon Macie to classify your S3 buckets such that threat detection and incident response pipelines are optimised.

A better idea of what’s in your bucket goes a long way!

Naming convention

Follow a simple yet interpretable naming convention for your AWS resources, including S3. Just like bucket classification, naming conventions help prioritise actions based on criticality.

Tagging

Include data classification attributes, business unit details, ownership information wherever possible in bucket tags, again with the aim to accelerate incident response times.

For AWS resources, a tag is just a key-value pair BUT can be of use in multiple ways.

You can specify who owns the resource, whether it is a temporary or a production resource, and so on.

This gives a sense of visibility to DevOps teams or security analysts while they work on operations or investigations.
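
A sketch of applying such tags via the aws cli; the bucket name and tag keys/values are illustrative, and note that put-bucket-tagging replaces the bucket’s entire tag set:

aws s3api put-bucket-tagging \
    --bucket <bucket> \
    --tagging 'TagSet=[{Key=DataClassification,Value=Confidential},{Key=Owner,Value=payments-team},{Key=Environment,Value=production}]'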

Wrapping up

Data is the new gold!

Most organisations use S3 in some shape or form to store data. 

Embed S3 security practices in your preventive and detective security program, and use tools to automate configuration checks to ensure S3 misconfigurations do not cost you dearly.