AWS Security Misconfigurations: Common Mistakes That Lead to Breaches

Most cloud breaches don't involve sophisticated exploits. Attackers exploit misconfigurations: small mistakes that expose massive amounts of data.

This guide covers the most common AWS security mistakes and how to fix them.

The AWS Shared Responsibility Model

┌─────────────────────────────────────────────────────────┐
│                    YOUR Responsibility                   │
│  ┌─────────────────────────────────────────────────┐   │
│  │  Data, Identity, Applications, OS, Network      │   │
│  │  Configuration, Encryption, Access Control      │   │
│  └─────────────────────────────────────────────────┘   │
├─────────────────────────────────────────────────────────┤
│                    AWS Responsibility                    │
│  ┌─────────────────────────────────────────────────┐   │
│  │  Hardware, Global Infrastructure, Managed       │   │
│  │  Services, Physical Security, Networking        │   │
│  └─────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────┘

AWS secures the cloud. You secure what you put IN the cloud.

Misconfiguration #1: Public S3 Buckets

The breach: Verizon, Accenture, Booz Allen Hamilton, and dozens of government agencies have all leaked data through world-readable buckets.

The Problem

# Check if bucket is public
aws s3api get-bucket-acl --bucket my-bucket
# Look for: "Grantee": {"URI": "http://acs.amazonaws.com/groups/global/AllUsers"}

# Or check whether the bucket policy allows public access
aws s3api get-bucket-policy --bucket my-bucket
# Look for: "Principal": "*"

Real Attack Scenario

# Attacker scans for buckets
# Common patterns: company-name, company-backup, company-logs

# Check if listable
aws s3 ls s3://company-backup --no-sign-request

# Download everything
aws s3 sync s3://company-backup ./loot --no-sign-request

The Fix

// Block all public access at account level
{
    "BlockPublicAcls": true,
    "IgnorePublicAcls": true,
    "BlockPublicPolicy": true,
    "RestrictPublicBuckets": true
}
# Apply to all buckets in account
aws s3control put-public-access-block \
    --account-id $ACCOUNT_ID \
    --public-access-block-configuration \
    "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
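To verify the account-level setting took effect, compare the returned configuration against all four flags. A minimal sketch in Python; the dict shape matches the `get-public-access-block` output, but the helper name is ours:

```python
# All four flags must be true for public access to be fully blocked
REQUIRED = ("BlockPublicAcls", "IgnorePublicAcls",
            "BlockPublicPolicy", "RestrictPublicBuckets")

def public_access_fully_blocked(config: dict) -> bool:
    """True only if every protection flag is explicitly enabled."""
    return all(config.get(flag) is True for flag in REQUIRED)

print(public_access_fully_blocked({
    "BlockPublicAcls": True, "IgnorePublicAcls": True,
    "BlockPublicPolicy": True, "RestrictPublicBuckets": False,
}))  # False
```

A single disabled flag (here RestrictPublicBuckets) is enough to fail the check, which is the behavior you want in a CI audit.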

Detection

# Find public buckets
# (--output text puts every name on one line, so iterate with for, not while read)
for bucket in $(aws s3api list-buckets --query 'Buckets[*].Name' --output text); do
    acl=$(aws s3api get-bucket-acl --bucket "$bucket" 2>/dev/null)
    if echo "$acl" | grep -q "AllUsers\|AuthenticatedUsers"; then
        echo "PUBLIC: $bucket"
    fi
done

Misconfiguration #2: Overprivileged IAM Roles

The Problem

// The "just make it work" policy
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "*",
        "Resource": "*"
    }]
}
// Congratulations: your Lambda can now do anything in your AWS account

Attack Scenario: Privilege Escalation

# Attacker gets code execution in a Lambda with excessive permissions
import boto3
iam = boto3.client('iam')

# Create a new admin user
iam.create_user(UserName='backdoor-admin')
iam.attach_user_policy(
    UserName='backdoor-admin',
    PolicyArn='arn:aws:iam::aws:policy/AdministratorAccess'
)
iam.create_access_key(UserName='backdoor-admin')
# Game over

The Fix: Least Privilege

// Lambda that only needs to read from one S3 bucket
// and write to one DynamoDB table
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::my-input-bucket/*"
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/my-table"
        }
    ]
}
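A cheap guard against the wildcard policy above is to lint policy documents before they are attached. A sketch, assuming the policy is available as parsed JSON; the helper is illustrative, not an AWS API:

```python
def find_wildcard_statements(policy: dict) -> list:
    """Return the Allow statements that use '*' in Action or Resource."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # Both fields may be a single string or a list in IAM JSON
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions or "*" in resources:
            flagged.append(stmt)
    return flagged

risky = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}
print(len(find_wildcard_statements(risky)))  # 1
```

Wiring a check like this into your deploy pipeline fails the build before a "just make it work" policy reaches production.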

Detection: Find Overprivileged Roles

# Find roles with admin access
for role in $(aws iam list-roles --query 'Roles[*].RoleName' --output text); do
    policies=$(aws iam list-attached-role-policies --role-name "$role" --query 'AttachedPolicies[*].PolicyArn' --output text)
    if echo "$policies" | grep -q "AdministratorAccess"; then
        echo "ADMIN ROLE: $role"
    fi
done

Misconfiguration #3: Exposed EC2 Metadata Service

The Problem

The EC2 metadata service (169.254.169.254) hands out the instance role's temporary credentials. If your app is vulnerable to SSRF, attackers can steal those credentials. This is exactly how Capital One was breached in 2019: an SSRF-style flaw let the attacker pull role credentials from the metadata service and use them to read S3 buckets.

Attack Scenario: SSRF to Credential Theft

# Attacker finds SSRF in web app
curl "https://vulnerable-app.com/fetch?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/"
# Returns: my-ec2-role

curl "https://vulnerable-app.com/fetch?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/my-ec2-role"
# Returns:
{
    "AccessKeyId": "ASIA...",
    "SecretAccessKey": "...",
    "Token": "...",
    "Expiration": "2025-08-12T20:00:00Z"
}

The Fix: IMDSv2 (Require Token)

# Require IMDSv2 for all instances
aws ec2 modify-instance-metadata-options \
    --instance-id i-1234567890abcdef0 \
    --http-tokens required \
    --http-endpoint enabled

# IMDSv2 requires a token, blocking simple SSRF
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

curl -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/latest/meta-data/

Enforce at Launch

# CloudFormation (Terraform equivalent: metadata_options { http_tokens = "required" })
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      MetadataOptions:
        HttpTokens: required
        HttpPutResponseHopLimit: 1
        HttpEndpoint: enabled
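To find instances that still accept IMDSv1, filter `describe-instances` output on `MetadataOptions.HttpTokens`. A sketch over the response shape; the helper name is ours:

```python
def instances_allowing_imdsv1(instances: list) -> list:
    """Return IDs of instances where IMDSv2 is not required.

    Each record mirrors an Instances entry from describe-instances;
    a missing MetadataOptions block is treated as not enforced.
    """
    return [
        inst["InstanceId"]
        for inst in instances
        if inst.get("MetadataOptions", {}).get("HttpTokens") != "required"
    ]

fleet = [
    {"InstanceId": "i-aaa", "MetadataOptions": {"HttpTokens": "required"}},
    {"InstanceId": "i-bbb", "MetadataOptions": {"HttpTokens": "optional"}},
]
print(instances_allowing_imdsv1(fleet))  # ['i-bbb']
```

Run the `modify-instance-metadata-options` command shown earlier against every ID this returns.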

Misconfiguration #4: Wide-Open Security Groups

The Problem

# The classic: a "temporary" 0.0.0.0/0 rule that's been there for 3 years
aws ec2 describe-security-groups --query \
    'SecurityGroups[*].IpPermissions[?IpRanges[?CidrIp==`0.0.0.0/0`]]'

Common Dangerous Rules

Port    Risk                Why It's Bad
22      SSH to world        Brute force, key theft
3389    RDP to world        BlueKeep, credential spray
3306    MySQL to world      Data theft, SQLi
27017   MongoDB to world    No auth by default
6379    Redis to world      No auth by default
9200    Elasticsearch       Data exposure, RCE
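The table above translates directly into an audit rule. A sketch that takes the IpPermissions list from `describe-security-groups` and returns the risky ports exposed to the world; the helper name and port set are ours, so extend it for your stack:

```python
# Ports from the table above; add your own services here
RISKY_PORTS = {22, 3389, 3306, 27017, 6379, 9200}

def world_open_risky_ports(ip_permissions: list) -> set:
    """Return risky ports that any rule opens to 0.0.0.0/0."""
    exposed = set()
    for perm in ip_permissions:
        open_to_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
        )
        if not open_to_world:
            continue
        # Protocol "-1" (all traffic) has no port fields: treat as all ports
        lo = perm.get("FromPort", 0)
        hi = perm.get("ToPort", 65535)
        exposed |= {p for p in RISKY_PORTS if lo <= p <= hi}
    return exposed

perms = [{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
          "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]
print(world_open_risky_ports(perms))  # {22}
```

Because it checks port ranges rather than exact matches, a 0-65535 allow-all rule lights up every entry in the list.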

The Fix

// NEVER
{
    "IpProtocol": "tcp",
    "FromPort": 22,
    "ToPort": 22,
    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]
}

// ALWAYS restrict to known IPs or use bastion/VPN
{
    "IpProtocol": "tcp",
    "FromPort": 22,
    "ToPort": 22,
    "IpRanges": [{"CidrIp": "10.0.0.0/8"}]  // VPN range only
}

Detection

# Find security groups with 0.0.0.0/0
aws ec2 describe-security-groups --query \
    'SecurityGroups[?IpPermissions[?IpRanges[?CidrIp==`0.0.0.0/0`]]].{Name:GroupName,ID:GroupId}' \
    --output table

Misconfiguration #5: Unencrypted Data

At Rest

# Find unencrypted EBS volumes
aws ec2 describe-volumes --query \
    'Volumes[?Encrypted==`false`].{ID:VolumeId,Size:Size}' \
    --output table

# Find unencrypted RDS instances
aws rds describe-db-instances --query \
    'DBInstances[?StorageEncrypted==`false`].DBInstanceIdentifier'

# Find unencrypted S3 buckets
for bucket in $(aws s3api list-buckets --query 'Buckets[*].Name' --output text); do
    enc=$(aws s3api get-bucket-encryption --bucket "$bucket" 2>&1)
    if echo "$enc" | grep -q "ServerSideEncryptionConfigurationNotFoundError"; then
        echo "UNENCRYPTED: $bucket"
    fi
done
# Note: since January 2023, S3 applies SSE-S3 encryption by default, so this
# check rarely fires; use it to find buckets that should use KMS keys instead.

The Fix

# Enable default encryption for S3 bucket
aws s3api put-bucket-encryption --bucket my-bucket \
    --server-side-encryption-configuration '{
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/my-key"
            }
        }]
    }'

# Enable encryption for new EBS volumes by default
# (this setting is per region; run it in every region you use)
aws ec2 enable-ebs-encryption-by-default

Misconfiguration #6: CloudTrail Disabled or Incomplete

The Problem

No CloudTrail = No visibility = No detection

# Check if CloudTrail is enabled
aws cloudtrail describe-trails
# Empty response = you're blind

# A trail can exist but be paused; confirm it's actually logging
aws cloudtrail get-trail-status --name my-audit-trail --query 'IsLogging'

The Fix

# Create trail logging to S3
aws cloudtrail create-trail \
    --name my-audit-trail \
    --s3-bucket-name my-cloudtrail-bucket \
    --is-multi-region-trail \
    --enable-log-file-validation

# Start logging
aws cloudtrail start-logging --name my-audit-trail

# Enable insights (anomaly detection)
aws cloudtrail put-insight-selectors \
    --trail-name my-audit-trail \
    --insight-selectors '[{"InsightType": "ApiCallRateInsight"}]'
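Beyond mere existence, a trail's settings matter. A sketch that flags weak settings in a single `describe-trails` record; the field names match the CloudTrail API response, but the helper itself is illustrative:

```python
def trail_gaps(trail: dict) -> list:
    """Return human-readable gaps in one trail's configuration."""
    gaps = []
    if not trail.get("IsMultiRegionTrail"):
        gaps.append("single-region only")
    if not trail.get("LogFileValidationEnabled"):
        gaps.append("log file validation off")
    if not trail.get("KmsKeyId"):
        gaps.append("logs not KMS-encrypted")
    return gaps

print(trail_gaps({"Name": "my-audit-trail", "IsMultiRegionTrail": True}))
# ['log file validation off', 'logs not KMS-encrypted']
```

An empty list means the trail covers all regions, signs its log files, and encrypts them with KMS.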

Security Audit Checklist

## Identity & Access
□ No root account access keys
□ MFA on root account
□ MFA on all IAM users
□ No * permissions in policies
□ Regular access key rotation
□ Remove unused IAM users/roles

## Network
□ No 0.0.0.0/0 ingress rules (except 80/443 for public LBs)
□ VPC Flow Logs enabled
□ Default VPC not used for production
□ Private subnets for databases

## Data Protection
□ S3 public access blocked at account level
□ S3 buckets encrypted
□ EBS volumes encrypted
□ RDS instances encrypted
□ No secrets in environment variables

## Logging & Monitoring
□ CloudTrail enabled in all regions
□ CloudTrail log validation enabled
□ GuardDuty enabled
□ Config rules for compliance

## Compute
□ IMDSv2 required on all EC2
□ SSM Session Manager instead of SSH
□ No public EC2 instances with admin roles

Automated Scanning Tools

# Prowler - AWS Security Assessment
pip install prowler
prowler aws

# ScoutSuite - Multi-cloud security auditing
pip install scoutsuite
scout aws

# CloudSploit - Cloud Security Scans
# https://github.com/aquasecurity/cloudsploit

Conclusion

AWS security isn’t about advanced threats — it’s about the basics:

  1. Block public S3 at the account level
  2. Least privilege IAM — no * actions
  3. Require IMDSv2 — block SSRF attacks
  4. Encrypt everything — at rest and in transit
  5. Enable CloudTrail — you can’t protect what you can’t see

The attackers aren’t looking for zero-days. They’re looking for the S3 bucket you forgot about in 2019.


Related: Docker Security Best Practices