Cloud Security Best Practices: Protecting Your AWS Infrastructure

Engineering Leadership, CodeNex Engineering
July 17, 2025
18 min read
#Security #AWS #IAM #Compliance #CloudSecurity

Security breaches cost companies an average of $4.35M per incident. This guide covers practical security measures to protect your AWS infrastructure.

The Security Mindset

Security isn't a checkbox—it's a continuous process:

  1. Prevention: Make attacks difficult
  2. Detection: Know when attacks occur
  3. Response: Minimize damage quickly
  4. Recovery: Return to normal operations

Principle #1: Identity and Access Management (IAM)

Never Use Root Account

The Problem: Root account has unlimited access to everything.

The Solution:

# Create an admin user instead
aws iam create-user --user-name admin

# Attach administrator policy
aws iam attach-user-policy \
  --user-name admin \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Enable a virtual MFA device for the admin user
# (created beforehand with `aws iam create-virtual-mfa-device`;
#  MFA for the root account itself can only be enabled in the console)
aws iam enable-mfa-device \
  --user-name admin \
  --serial-number arn:aws:iam::123456789012:mfa/admin \
  --authentication-code1 123456 \
  --authentication-code2 789012

Use IAM Roles, Not Access Keys

Bad: Hardcoded credentials

// NEVER DO THIS
const AWS = require('aws-sdk');
AWS.config.update({
  accessKeyId: 'AKIAIOSFODNN7EXAMPLE',
  secretAccessKey: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
});

Good: Use IAM roles for EC2/Lambda

// Automatically uses instance role
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// No credentials in code!
const objects = await s3.listObjects({ Bucket: 'my-bucket' }).promise();

Implement Least Privilege

Bad: Overly permissive policy

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "*",
    "Resource": "*"
  }]
}

Good: Specific permissions

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::my-app-uploads/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Users"
    }
  ]
}

Use IAM Policy Conditions

Restrict access by IP, time, or MFA:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "*",
    "Condition": {
      "IpAddress": {
        "aws:SourceIp": ["203.0.113.0/24"]
      },
      "Bool": {
        "aws:MultiFactorAuthPresent": "true"
      }
    }
  }]
}
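
The policy above covers source IP and MFA. Time-of-day restrictions use the aws:CurrentTime condition key with the DateGreaterThan/DateLessThan operators. A minimal sketch, attaching a hypothetical time-bounded inline policy to the admin user from earlier (the bucket name and time window are illustrative):

# Hypothetical inline policy: allow S3 reads only during a fixed time window
cat > time-window-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-app-uploads/*",
    "Condition": {
      "DateGreaterThan": { "aws:CurrentTime": "2025-07-17T09:00:00Z" },
      "DateLessThan":    { "aws:CurrentTime": "2025-07-17T18:00:00Z" }
    }
  }]
}
EOF

# Attach it as an inline policy
aws iam put-user-policy \
  --user-name admin \
  --policy-name s3-time-window \
  --policy-document file://time-window-policy.json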

Principle #2: Network Security

VPC Design

Proper network segmentation:

┌─────────────────────────────────────┐
│          VPC (10.0.0.0/16)          │
│                                     │
│  ┌──────────────────────────────┐  │
│  │  Public Subnet (10.0.1.0/24) │  │
│  │  - Load Balancer             │  │
│  │  - NAT Gateway               │  │
│  └──────────────────────────────┘  │
│                                     │
│  ┌──────────────────────────────┐  │
│  │ Private Subnet (10.0.2.0/24) │  │
│  │  - Application Servers       │  │
│  │  - No internet access        │  │
│  └──────────────────────────────┘  │
│                                     │
│  ┌──────────────────────────────┐  │
│  │ Database Subnet (10.0.3.0/24)│  │
│  │  - RDS Instances             │  │
│  │  - Isolated from internet    │  │
│  └──────────────────────────────┘  │
└─────────────────────────────────────┘

Terraform implementation:

resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.2.0/24"
}

resource "aws_subnet" "database" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.3.0/24"
}

# Network ACL for database subnet
resource "aws_network_acl" "database" {
  vpc_id     = aws_vpc.main.id
  subnet_ids = [aws_subnet.database.id]

  # Only allow traffic from private subnet
  ingress {
    protocol   = "tcp"
    rule_no    = 100
    action     = "allow"
    cidr_block = aws_subnet.private.cidr_block
    from_port  = 5432
    to_port    = 5432
  }

  egress {
    protocol   = "tcp"
    rule_no    = 100
    action     = "allow"
    cidr_block = aws_subnet.private.cidr_block
    from_port  = 1024
    to_port    = 65535
  }
}

Security Groups: Defense in Depth

Application tier security group:

resource "aws_security_group" "app" {
  name        = "app-tier"
  description = "Application tier security group"
  vpc_id      = aws_vpc.main.id

  # Allow HTTPS from load balancer only
  ingress {
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  # Allow all outbound
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Database tier security group:

resource "aws_security_group" "database" {
  name        = "database-tier"
  description = "Database tier security group"
  vpc_id      = aws_vpc.main.id

  # Only allow PostgreSQL from app tier
  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.app.id]
  }

  # No outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["10.0.0.0/16"]  # VPC only
  }
}

Principle #3: Data Encryption

Encryption at Rest

S3 bucket encryption:

resource "aws_s3_bucket" "data" {
  bucket = "my-secure-data"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
  bucket = aws_s3_bucket.data.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.s3.arn
    }
  }
}

# Deny unencrypted uploads
resource "aws_s3_bucket_policy" "data" {
  bucket = aws_s3_bucket.data.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Deny"
      Principal = "*"
      Action = "s3:PutObject"
      Resource = "${aws_s3_bucket.data.arn}/*"
      Condition = {
        StringNotEquals = {
          "s3:x-amz-server-side-encryption" = "aws:kms"
        }
      }
    }]
  })
}

RDS encryption:

resource "aws_db_instance" "main" {
  identifier        = "production-db"
  engine            = "postgres"
  instance_class    = "db.t3.large"

  # Enable encryption
  storage_encrypted = true
  kms_key_id       = aws_kms_key.rds.arn

  # Automated backups are also encrypted
  backup_retention_period = 7
}

Encryption in Transit

Enforce HTTPS only:

resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.main.arn
  port              = "443"
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-TLS-1-2-2017-01"
  certificate_arn   = aws_acm_certificate.main.arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}

# Redirect HTTP to HTTPS
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.main.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type = "redirect"
    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}

Principle #4: Secrets Management

Never Hardcode Secrets

Bad:

const dbPassword = 'MyP@ssw0rd123';  // NEVER!

Good: Use AWS Secrets Manager

const AWS = require('aws-sdk');
const secretsManager = new AWS.SecretsManager();

async function getDatabasePassword() {
  const secret = await secretsManager.getSecretValue({
    SecretId: 'prod/database/password'
  }).promise();

  return JSON.parse(secret.SecretString).password;
}

// Use in application
const password = await getDatabasePassword();
const db = await connectToDatabase({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: password
});

Terraform setup:

resource "aws_secretsmanager_secret" "db_password" {
  name = "prod/database/password"

  rotation_rules {
    automatically_after_days = 30
  }
}

resource "aws_secretsmanager_secret_version" "db_password" {
  secret_id = aws_secretsmanager_secret.db_password.id
  secret_string = jsonencode({
    username = "admin"
    password = random_password.db.result
  })
}

# Auto-rotation Lambda function
resource "aws_lambda_permission" "secrets_manager" {
  statement_id  = "AllowSecretsManager"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.rotate_secret.function_name
  principal     = "secretsmanager.amazonaws.com"
}

Principle #5: Logging and Monitoring

Enable CloudTrail

Track all API calls:

resource "aws_cloudtrail" "main" {
  name                          = "organization-trail"
  s3_bucket_name                = aws_s3_bucket.cloudtrail.id
  include_global_service_events = true
  is_multi_region_trail         = true
  enable_log_file_validation    = true

  # Log to CloudWatch for real-time alerts
  cloud_watch_logs_group_arn = "${aws_cloudwatch_log_group.cloudtrail.arn}:*"
  cloud_watch_logs_role_arn  = aws_iam_role.cloudtrail.arn

  event_selector {
    read_write_type           = "All"
    include_management_events = true

    data_resource {
      type = "AWS::S3::Object"
      values = ["arn:aws:s3:::"]
    }
  }
}

Set Up Security Alerts

Alert on root account usage:

resource "aws_cloudwatch_log_metric_filter" "root_usage" {
  name           = "root-account-usage"
  log_group_name = aws_cloudwatch_log_group.cloudtrail.name

  pattern = "{ $.userIdentity.type = \"Root\" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != \"AwsServiceEvent\" }"

  metric_transformation {
    name      = "RootAccountUsage"
    namespace = "Security"
    value     = "1"
  }
}

resource "aws_cloudwatch_metric_alarm" "root_usage" {
  alarm_name          = "root-account-used"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = "1"
  metric_name         = "RootAccountUsage"
  namespace           = "Security"
  period              = "60"
  statistic           = "Sum"
  threshold           = "1"
  alarm_description   = "Root account was used"
  alarm_actions       = [aws_sns_topic.security_alerts.arn]
}

Alert on unauthorized API calls:

resource "aws_cloudwatch_log_metric_filter" "unauthorized_calls" {
  name           = "unauthorized-api-calls"
  log_group_name = aws_cloudwatch_log_group.cloudtrail.name

  pattern = "{ ($.errorCode = \"*UnauthorizedOperation\") || ($.errorCode = \"AccessDenied*\") }"

  metric_transformation {
    name      = "UnauthorizedAPICalls"
    namespace = "Security"
    value     = "1"
  }
}
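
Pair this filter with an alarm, exactly as done for root-account usage above. As a quick sketch, the equivalent one-off CLI call looks like this (the SNS topic ARN is illustrative):

# Alarm whenever the UnauthorizedAPICalls metric is non-zero in a 1-minute window
aws cloudwatch put-metric-alarm \
  --alarm-name unauthorized-api-calls \
  --namespace Security \
  --metric-name UnauthorizedAPICalls \
  --statistic Sum \
  --period 60 \
  --evaluation-periods 1 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:security-alerts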

Principle #6: Compliance and Auditing

Enable AWS Config

Track resource configuration changes:

resource "aws_config_configuration_recorder" "main" {
  name     = "config-recorder"
  role_arn = aws_iam_role.config.arn

  recording_group {
    all_supported                 = true
    include_global_resource_types = true
  }
}

# Check for publicly accessible RDS instances
resource "aws_config_config_rule" "rds_public_access" {
  name = "rds-instance-public-access-check"

  source {
    owner             = "AWS"
    source_identifier = "RDS_INSTANCE_PUBLIC_ACCESS_CHECK"
  }

  depends_on = [aws_config_configuration_recorder.main]
}

# Check for S3 buckets with public access
resource "aws_config_config_rule" "s3_public_read" {
  name = "s3-bucket-public-read-prohibited"

  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_PUBLIC_READ_PROHIBITED"
  }

  depends_on = [aws_config_configuration_recorder.main]
}

Use AWS Security Hub

Centralized security findings:

resource "aws_securityhub_account" "main" {}

resource "aws_securityhub_standards_subscription" "cis" {
  standards_arn = "arn:aws:securityhub:::ruleset/cis-aws-foundations-benchmark/v/1.2.0"
  depends_on    = [aws_securityhub_account.main]
}

resource "aws_securityhub_standards_subscription" "pci_dss" {
  standards_arn = "arn:aws:securityhub:us-east-1::standards/pci-dss/v/3.2.1"
  depends_on    = [aws_securityhub_account.main]
}

Principle #7: DDoS Protection

Use AWS Shield and WAF

resource "aws_wafv2_web_acl" "main" {
  name  = "production-waf"
  scope = "REGIONAL"

  default_action {
    allow {}
  }

  # Rate limiting rule
  rule {
    name     = "rate-limit"
    priority = 1

    action {
      block {}
    }

    statement {
      rate_based_statement {
        limit              = 2000
        aggregate_key_type = "IP"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "RateLimitRule"
      sampled_requests_enabled   = true
    }
  }

  # Block known bad IPs
  rule {
    name     = "ip-reputation"
    priority = 2

    override_action {
      none {}
    }

    statement {
      managed_rule_group_statement {
        name        = "AWSManagedRulesAmazonIpReputationList"
        vendor_name = "AWS"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "IPReputationRule"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "ProductionWAF"
    sampled_requests_enabled   = true
  }
}

# Associate WAF with ALB
resource "aws_wafv2_web_acl_association" "main" {
  resource_arn = aws_lb.main.arn
  web_acl_arn  = aws_wafv2_web_acl.main.arn
}

Incident Response Playbook

1. Compromised IAM Credentials

Immediate actions:

# Disable the access key
aws iam update-access-key \
  --user-name compromised-user \
  --access-key-id AKIAIOSFODNN7EXAMPLE \
  --status Inactive

# Review CloudTrail for unauthorized actions
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=Username,AttributeValue=compromised-user \
  --start-time 2025-01-01T00:00:00Z

# Delete the access key
aws iam delete-access-key \
  --user-name compromised-user \
  --access-key-id AKIAIOSFODNN7EXAMPLE

# Rotate all other credentials
aws iam create-access-key --user-name legitimate-user

2. Data Breach

Steps (a containment sketch follows the list):

  1. Enable S3 versioning to preserve evidence
  2. Review CloudTrail for data access patterns
  3. Enable S3 object lock to prevent deletion
  4. Notify affected parties as required by law
  5. Conduct post-mortem analysis
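
A minimal CLI sketch for the containment steps (bucket and object names are illustrative; a legal hold requires Object Lock to be enabled on the bucket):

# Preserve evidence: turn on versioning so objects can't be silently overwritten
aws s3api put-bucket-versioning \
  --bucket my-app-uploads \
  --versioning-configuration Status=Enabled

# Cut off public access while the investigation runs
aws s3api put-public-access-block \
  --bucket my-app-uploads \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Place a legal hold on a suspected-exfiltrated object (needs Object Lock on the bucket)
aws s3api put-object-legal-hold \
  --bucket my-app-uploads \
  --key path/to/exported-report.csv \
  --legal-hold Status=ON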

Security Checklist

Infrastructure

✅ VPC with public/private/database subnets
✅ Security groups with least privilege
✅ Network ACLs for additional protection
✅ NAT Gateways in public subnets
✅ VPC Flow Logs enabled (see the CLI sketch below)
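
A quick way to cover the last item: VPC Flow Logs can be enabled with a single call (the VPC ID, log group, and IAM role are illustrative):

# Send all VPC traffic metadata to CloudWatch Logs
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-0123456789abcdef0 \
  --traffic-type ALL \
  --log-destination-type cloud-watch-logs \
  --log-group-name vpc-flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::123456789012:role/vpc-flow-logs-role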

Identity & Access

✅ Root account MFA enabled
✅ Root account not used for daily tasks
✅ IAM users have MFA
✅ IAM roles used instead of access keys
✅ Least privilege IAM policies
✅ Regular access key rotation (see the CLI sketch below)
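
Stale access keys and missing MFA are easiest to spot with the IAM credential report. A minimal sketch:

# Generate and download the account-wide credential report (base64-encoded CSV)
aws iam generate-credential-report
aws iam get-credential-report --query Content --output text | base64 --decode

# Inspect the access keys (and their creation dates) for a specific user
aws iam list-access-keys --user-name admin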

Data Protection

✅ S3 buckets encrypted with KMS
✅ RDS instances encrypted
✅ Secrets in Secrets Manager
✅ TLS 1.2+ for all connections
✅ Bucket policies deny unencrypted uploads

Monitoring & Compliance

✅ CloudTrail enabled in all regions
✅ CloudWatch alarms for suspicious activity
✅ AWS Config rules monitoring compliance
✅ Security Hub enabled
✅ GuardDuty threat detection active (see the CLI sketch below)
✅ Regular security audits scheduled
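
GuardDuty is the one item above not shown earlier in this guide; enabling it is a single call per region:

# Enable GuardDuty threat detection in the current region
aws guardduty create-detector \
  --enable \
  --finding-publishing-frequency FIFTEEN_MINUTES

# Confirm the detector exists
aws guardduty list-detectors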

Application Security

✅ WAF protecting public endpoints
✅ DDoS protection (Shield)
✅ Rate limiting implemented
✅ Input validation on all user data
✅ Regular dependency updates
✅ Container image scanning (see the CLI sketch below)
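
For the container scanning item, ECR can scan images on every push. A minimal sketch (the repository name is illustrative):

# Turn on scan-on-push for an existing ECR repository
aws ecr put-image-scanning-configuration \
  --repository-name my-app \
  --image-scanning-configuration scanOnPush=true

# Review findings for a pushed image tag
aws ecr describe-image-scan-findings \
  --repository-name my-app \
  --image-id imageTag=latest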

Real-World Case Study

Client: Healthcare SaaS (HIPAA compliance required)

Initial State:

  • Public S3 buckets with patient data
  • No encryption at rest
  • Shared admin credentials
  • No audit logging

Security Implementation (3 months):

  1. Implemented VPC with proper segmentation
  2. Encrypted all data at rest with KMS
  3. Enabled CloudTrail + Config
  4. Deployed WAF with rate limiting
  5. Set up automated compliance scanning
  6. Implemented secrets rotation

Results:

  • Achieved HIPAA compliance certification
  • Zero security incidents in 18 months
  • Passed all external security audits
  • Reduced insurance premiums by 40%

Cost: $8,000/month (infrastructure + monitoring)

Recommended Tools

Security Scanning:

  • AWS Inspector: Vulnerability scanning
  • Prowler: AWS security best practices audit
  • ScoutSuite: Multi-cloud security auditing

Secrets Management:

  • AWS Secrets Manager
  • HashiCorp Vault

Compliance:

  • AWS Audit Manager
  • CloudCheckr
  • Dome9 (Check Point)

Conclusion

Cloud security is not optional. Key takeaways:

  • Security is a shared responsibility
  • Start with IAM and network isolation
  • Encrypt everything at rest and in transit
  • Enable comprehensive logging and monitoring
  • Automate compliance checks
  • Have an incident response plan
  • Regular security audits are essential

Need a security audit? Schedule a consultation or download our AWS security checklist.