Automated AWS Security Auditing with Prowler: A Blueprint for Small Teams
You don't need a six-figure security budget to audit your AWS account like the big players. Prowler is free, open-source, and runs 584 security checks in minutes. Pair it with an automated notification pipeline, and you have a production-grade security auditing system for under a dollar a month.
If you're running workloads on AWS with a small team, you've probably had the same nagging thought I have: "Are we actually secure, or are we just lucky?"
Most small organizations don't have a dedicated security team. The people deploying infrastructure are the same people writing application code, managing CI/CD, and answering support tickets. Security audits get pushed to "next sprint" indefinitely.
That's exactly the gap Prowler fills. It's an open-source cloud security tool that scans your entire AWS account against hundreds of security checks — covering everything from S3 bucket policies to IAM misconfigurations to encryption settings — and tells you exactly what's wrong and how to fix it.
In this post, I'll walk you through setting up Prowler, running your first audit, automating it on a schedule, and — this is the part that ties everything together — piping the results into a real-time notification system so your team actually sees and acts on findings instead of letting reports collect dust in an S3 bucket.
What Is Prowler?
Prowler is the most widely used open-source cloud security platform. Originally built for AWS, it now supports Azure, GCP, Kubernetes, and more — but AWS remains its strongest suit with 584 security checks out of the box.
Here's what makes it practical for small teams:
It maps to real compliance frameworks. Every check is tagged against one or more standards — CIS AWS Benchmarks, NIST 800-53, PCI-DSS, GDPR, HIPAA, SOC 2, and others. So when an auditor asks "are you CIS compliant?", you have an actual answer.
It tells you how to fix things. Each finding comes with remediation guidance. Not just "this is wrong," but "here's the AWS CLI command or console step to fix it."
It runs in minutes. A full scan of a typical small-org AWS account takes 5-15 minutes depending on the number of resources. No agents, no infrastructure to maintain.
It's free. Prowler is fully open-source under the Apache 2.0 license. There's a paid SaaS version with a dashboard, but the CLI does everything we need.
Prerequisites
Before we start, make sure you have:
- Python 3.9+ installed
- AWS CLI configured with credentials (`aws configure`)
- IAM permissions: Prowler needs read-only access. We'll create a dedicated role in Step 1.
```bash
# Verify your AWS identity
aws sts get-caller-identity
```

Step 1: Create a Dedicated IAM Role for Prowler
Don't run Prowler with your personal credentials. Create a dedicated role with the minimum permissions it needs:
```hcl
# prowler-iam.tf
resource "aws_iam_role" "prowler" {
  name = "ProwlerSecurityAuditRole"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        # For Lambda/Fargate execution:
        Service = ["lambda.amazonaws.com", "ecs-tasks.amazonaws.com"]
        # For manual runs from your machine, add your account:
        # AWS = "arn:aws:iam::YOUR_ACCOUNT_ID:root"
      }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "prowler_security_audit" {
  role       = aws_iam_role.prowler.name
  policy_arn = "arn:aws:iam::aws:policy/SecurityAudit"
}

resource "aws_iam_role_policy_attachment" "prowler_view_only" {
  role       = aws_iam_role.prowler.name
  policy_arn = "arn:aws:iam::aws:policy/ViewOnlyAccess"
}

# Additional permissions for Security Hub integration
resource "aws_iam_role_policy" "prowler_security_hub" {
  name = "ProwlerSecurityHubAccess"
  role = aws_iam_role.prowler.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "securityhub:BatchImportFindings",
        "securityhub:GetFindings"
      ]
      Resource = "*"
    }]
  })
}
```

These are read-only policies — Prowler never modifies your infrastructure. The SecurityAudit and ViewOnlyAccess managed policies cover all 584 checks.
For manual runs, assume the role:
```bash
aws sts assume-role \
  --role-arn arn:aws:iam::YOUR_ACCOUNT_ID:role/ProwlerSecurityAuditRole \
  --role-session-name prowler-scan

# Or run Prowler with the role directly
prowler aws -R arn:aws:iam::YOUR_ACCOUNT_ID:role/ProwlerSecurityAuditRole
```

Step 2: Install Prowler
The simplest installation is via pip:
```bash
pip install prowler
prowler -v
```

You should see something like `Prowler 5.x.x`. If you prefer containerized execution:
```bash
docker run -ti --rm \
  --name prowler \
  --env AWS_ACCESS_KEY_ID \
  --env AWS_SECRET_ACCESS_KEY \
  --env AWS_SESSION_TOKEN \
  -v "$(pwd)/output":/home/prowler/output \
  prowlercloud/prowler:latest
```

Step 3: Run Your First Scan
Start with a full scan of your account:
```bash
prowler aws
```

That's it. Prowler will enumerate your AWS resources and run all 584 checks. You'll see real-time output showing PASS, FAIL, and MANUAL findings.
For a more targeted first run, scan specific services:
```bash
# Scan only S3 and IAM — the two most common misconfiguration sources
prowler aws -s s3 iam
```

Or run against a specific compliance framework:
```bash
# CIS AWS Foundations Benchmark
prowler aws --compliance cis_2.0_aws

# NIST 800-53
prowler aws --compliance nist_800_53_revision_5_aws
```

Understanding the Output
Prowler generates two files by default:
- CSV — machine-readable, great for filtering and processing
- JSON-OCSF — Open Cybersecurity Schema Framework format for interoperability
Add an HTML report for a nice visual overview:
```bash
prowler aws -M csv json-ocsf html
```

The HTML report is self-contained and looks surprisingly good — it's the easiest way to share results with non-technical stakeholders.
Output location: By default, findings land in ./output/ in your working directory.
Step 4: Read and Prioritize Findings
Your first scan will almost certainly return a wall of failures. Don't panic — this is normal. Here's how to prioritize:
Start with Critical and High severity. Prowler categorizes findings by severity. Filter the CSV:
```bash
# Count findings by severity. Adjust the field separator and the
# severity column index to your report's header row; both vary by
# Prowler version.
awk -F',' '{print $8}' output/prowler-output-*.csv | sort | uniq -c | sort -rn
```

Focus on these high-impact categories first:
- Public S3 buckets — Data exposure risk. Fix immediately.
- IAM users without MFA — Account compromise risk.
- Unencrypted EBS volumes and RDS instances — Compliance violation.
- Overly permissive security groups — 0.0.0.0/0 on non-web ports.
- Root account usage — Should never be used for daily operations.
- Aged access keys — Keys older than 90 days should be rotated.
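awk works until a field value contains a quoted comma. A short Python pass is more robust for triage. Note that the `SEVERITY` column name and the semicolon delimiter below are assumptions: check the header row of your own report, since the exact layout varies between Prowler versions.

```python
import csv
from collections import Counter

def severity_counts(path: str, severity_col: str = "SEVERITY",
                    delimiter: str = ";") -> Counter:
    """Count Prowler findings per severity level.

    Assumes a header row with a SEVERITY column and a semicolon
    delimiter -- verify both against your own report, as the CSV
    layout differs between Prowler versions.
    """
    with open(path, newline="") as f:
        reader = csv.DictReader(f, delimiter=delimiter)
        return Counter(row[severity_col] for row in reader
                       if row.get(severity_col))
```

Point it at the newest file in `./output/` and tackle whatever tops the critical count first.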
Mute accepted risks. Not every finding requires action. Some are intentional design decisions. Prowler supports an allowlist to mute known exceptions so future scans stay clean:
```bash
prowler aws --mutelist-file allowlist.yaml
```

Example allowlist.yaml:
```yaml
Mutelist:
  Accounts:
    "123456789012":
      Checks:
        "s3_bucket_public_access":
          Regions:
            - "us-east-1"
          Resources:
            - "my-public-website-bucket"
          Reason: "Intentionally public — hosts static website"
```

Step 5: Automate on a Schedule
A one-time scan is useful. A weekly automated scan is powerful. Here are three approaches, ordered by complexity:
Option A: Cron Job on an EC2 Instance (Simplest)
If you already have a bastion host or management instance:
```bash
# Add to crontab — runs every Monday at 2 AM UTC
0 2 * * 1 /usr/local/bin/prowler aws -M csv json-ocsf -o /opt/prowler-reports/ 2>&1 | logger -t prowler
```

Option B: AWS Fargate Task (Serverless)
For teams that don't want to manage instances:
```hcl
# EventBridge rule triggers a Fargate task weekly
resource "aws_cloudwatch_event_rule" "prowler_schedule" {
  name                = "prowler-weekly-scan"
  schedule_expression = "rate(7 days)"
}
```

AWS provides an official Terraform module for this at aws-samples/aws-tf-prowler-fargate.
Option C: Lambda with Container Image (Cost-Optimized)
For small accounts where scans complete within Lambda's 15-minute timeout:
```hcl
resource "aws_lambda_function" "prowler_scan" {
  function_name = "prowler-security-scan"
  package_type  = "Image"
  image_uri     = "${aws_ecr_repository.prowler.repository_url}:latest"
  role          = aws_iam_role.prowler.arn # the role from Step 1
  timeout       = 900
  memory_size   = 1024

  environment {
    variables = {
      PROWLER_ARGS  = "-s s3 iam ec2 rds -M csv json-ocsf"
      S3_BUCKET     = aws_s3_bucket.prowler_reports.id
      OUTPUT_PREFIX = "prowler-reports/"
    }
  }
}
```

Whichever option you choose, store reports in S3 with encryption, access logging, and a lifecycle policy. Prowler reports contain sensitive resource metadata — treat them like security data:
```hcl
resource "aws_s3_bucket" "prowler_reports" {
  bucket = "your-org-prowler-reports"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "reports" {
  bucket = aws_s3_bucket.prowler_reports.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "reports" {
  bucket = aws_s3_bucket.prowler_reports.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_s3_bucket_versioning" "reports" {
  bucket = aws_s3_bucket.prowler_reports.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "reports" {
  bucket = aws_s3_bucket.prowler_reports.id

  rule {
    id     = "archive-old-reports"
    status = "Enabled"

    transition {
      days          = 90
      storage_class = "GLACIER"
    }

    expiration {
      days = 365
    }
  }
}
```

This gives you KMS encryption at rest, public access blocked, versioning for audit trails, and automatic archival to Glacier after 90 days.
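For Option C, the container image needs a handler that runs Prowler and ships the reports to that bucket. Here's a minimal sketch; the handler layout, `/tmp` output path, and S3 key scheme are my own illustration (only the environment variable names match the Terraform above), not the behavior of any official Prowler image:

```python
import os
import shlex
import subprocess
from datetime import datetime, timezone
from pathlib import Path

def build_command(prowler_args: str, output_dir: str) -> list:
    """Turn the PROWLER_ARGS env string into an argv list."""
    return ["prowler", "aws", *shlex.split(prowler_args), "-o", output_dir]

def handler(event, context):
    import boto3  # imported lazily so the module also loads outside Lambda

    output_dir = "/tmp/prowler-output"  # /tmp is Lambda's only writable path
    cmd = build_command(os.environ.get("PROWLER_ARGS", ""), output_dir)
    # Prowler exits non-zero when any check fails, so check=True would
    # wrongly treat a successful scan with findings as an error.
    subprocess.run(cmd, timeout=840)

    s3 = boto3.client("s3")
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    prefix = os.environ.get("OUTPUT_PREFIX", "")
    for report in Path(output_dir).glob("*"):
        s3.upload_file(str(report), os.environ["S3_BUCKET"],
                       f"{prefix}{stamp}/{report.name}")
```

The 840-second subprocess timeout leaves the 900-second Lambda a minute of headroom to finish the S3 uploads.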
Step 6: Send Findings to AWS Security Hub
If you're using Security Hub (and you should be — it's part of the free tier for the first 30 days, then pay-per-finding), Prowler integrates natively:
1. Enable the Prowler integration in Security Hub:
Go to AWS Console → Security Hub → Integrations → Search "Prowler" → Accept Findings.
2. Send findings from the CLI:
```bash
prowler aws -M json-asff --security-hub --send-sh-only-fails
```

The --send-sh-only-fails flag is important — it sends only failed checks to Security Hub, keeping your costs down and your dashboard focused on what needs fixing.
Step 7: Real-Time Notifications with AWS Security Notification System
Here's where it all comes together. Running Prowler on a schedule is great, but if nobody looks at the reports, they're useless. You need findings to arrive where your team already lives — Slack.
This is exactly what the AWS Security Notification System does. It's a serverless, event-driven pipeline that monitors AWS security events and delivers rich Slack notifications in real-time.
How It Works
- Prowler runs on a schedule and sends failed findings to Security Hub.
- EventBridge listens for Security Hub finding events (with 40+ pre-configured patterns).
- Lambda picks up the event via SQS (with a dead letter queue for reliability).
- Slack receives a rich, formatted notification with severity, resource details, and remediation guidance.
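To make the last hop concrete, here is roughly what the formatting step looks like. The event shape follows the real "Security Hub Findings - Imported" EventBridge detail-type (ASFF fields such as `Severity.Label`, `Resources[].Id`, and `AwsAccountId`), but the message layout and emoji mapping are illustrative; the actual project's formatting is richer:

```python
SEVERITY_EMOJI = {"CRITICAL": "🔴", "HIGH": "🟠", "MEDIUM": "🟡", "LOW": "🟢"}

def format_finding(event: dict) -> dict:
    """Turn a Security Hub EventBridge event into a Slack webhook payload."""
    finding = event["detail"]["findings"][0]
    severity = finding["Severity"]["Label"]
    resource = finding["Resources"][0]
    lines = [
        f"{SEVERITY_EMOJI.get(severity, '⚪')} {severity} | Security Hub Finding",
        f"Resource: {resource['Id']}",
        f"Check: {finding['Title']}",
        f"Region: {resource.get('Region', 'unknown')}",
        f"Account: {finding['AwsAccountId']}",
    ]
    return {"text": "\n".join(lines)}  # POST this JSON to the Slack webhook
```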
Architecture Highlights
The notification system is purpose-built for small teams:
- Cost: ~$0.74/month for 10,000 events. Not a typo.
- Setup time: 5-10 minutes using CloudFormation or Terraform.
- Reliability: SQS with DLQ ensures no findings get dropped. Exponential backoff retries on Slack delivery failures.
- Smart formatting: Messages include risk indicators, severity badges, and resource context so your team can triage at a glance.
- Rate limiting: 30 messages/minute by default (configurable) to avoid Slack channel flooding.
- Message aggregation: Groups similar findings so you get one actionable alert instead of 50 identical ones.
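The aggregation behavior is worth sketching, because it's what keeps the channel usable. Grouping by finding title (my choice of key; the project may group differently) collapses 50 identical findings into one summary:

```python
from collections import defaultdict

def aggregate(findings: list) -> list:
    """Collapse findings sharing a title into one summary per check."""
    groups = defaultdict(list)
    for finding in findings:
        groups[finding["Title"]].append(finding)
    return [
        {
            "Title": title,
            "Count": len(group),
            # Show a handful of affected resources, not all of them
            "Resources": [f["Resources"][0]["Id"] for f in group[:5]],
        }
        for title, group in groups.items()
    ]
```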
Deploying the Notification System
The fastest path is CloudFormation:
```bash
git clone https://github.com/Parthasarathi7722/aws-security-notification.git
cd aws-security-notification
```

Deploy using the SAM template:

```bash
sam deploy --guided
```

Or using Terraform:

```bash
cd terraform/
terraform init
terraform plan -var="slack_webhook_url=https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
terraform apply
```

You'll need a Slack webhook URL — create one at api.slack.com/messaging/webhooks and point it at your #security-alerts channel.
What the Notifications Look Like
When Prowler flags a critical finding and it flows through the pipeline, your team gets a Slack message like:
```
🔴 CRITICAL | Security Hub Finding
Resource: arn:aws:s3:::production-data-bucket
Check: S3 bucket has public read access enabled
Severity: CRITICAL
Region: us-east-1
Account: 123456789012
Remediation: Remove public access by enabling S3 Block Public Access
at the bucket level.
```

No more "I'll check the report later." The finding is in Slack, in front of the right people, within seconds.
Putting It All Together: The Full Architecture
Here's the complete setup for a small team:
```
Weekly Cron/EventBridge
          │
          ▼
 ┌─────────────────┐
 │  Prowler Scan   │  (Fargate or Lambda)
 │ 584 AWS Checks  │
 └────────┬────────┘
          │
     ┌────┴────┐
     │         │
     ▼         ▼
┌────────┐  ┌──────────────┐
│   S3   │  │ Security Hub │
│ Reports│  │  (Findings)  │
└────────┘  └──────┬───────┘
                   │
                   ▼
            ┌──────────────┐
            │ EventBridge  │  (40+ event patterns)
            └──────┬───────┘
                   │
                   ▼
            ┌──────────────┐
            │  SQS + DLQ   │  (reliable delivery)
            └──────┬───────┘
                   │
                   ▼
            ┌──────────────┐
            │   Lambda     │  (Python 3.11)
            │  Formatter   │
            └──────┬───────┘
                   │
                   ▼
            ┌──────────────┐
            │    Slack     │  (rich notifications)
            │  #security   │
            └──────────────┘
```

Total monthly cost: Prowler itself is free. A Fargate task running weekly is roughly $2-5/month. The notification pipeline is ~$0.74/month. S3 storage for reports: pennies. You're looking at under $10/month for a complete security auditing and alerting system.
Recommended Scan Cadence
Not every check needs to run at the same frequency. Here's what I recommend for small teams:
| Frequency | What to Scan | Why |
|---|---|---|
| Daily | IAM (access keys, MFA, root usage) | Identity is the most exploited attack vector |
| Weekly | Full scan (all 584 checks) | Catches configuration drift |
| On every deploy | S3, Security Groups, RDS, ECS | Prevents misconfigs from reaching production |
| Monthly | Full compliance report (CIS/NIST) | Audit trail and trend tracking |
A Note on Multi-Account Scanning
If your organization has multiple AWS accounts (production, staging, dev — as it should), Prowler handles this natively. You can scan across accounts by assuming roles:
```bash
# Scan a different account by assuming a role in it
prowler aws -R arn:aws:iam::TARGET_ACCOUNT_ID:role/ProwlerSecurityAuditRole
```

Deploy the ProwlerSecurityAuditRole (from Step 1) into each account using CloudFormation StackSets or Terraform workspaces, then loop through accounts in your scheduled scan. The notification system will tag each finding with the source account ID, so your team knows exactly which account needs attention.
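The loop itself can be tiny. This sketch assumes the role from Step 1 exists under the same name in every target account; the account IDs are placeholders for your own:

```python
import subprocess

ACCOUNTS = ["111111111111", "222222222222"]  # replace with your account IDs
ROLE_NAME = "ProwlerSecurityAuditRole"       # the role from Step 1

def role_arn(account_id: str, role_name: str = ROLE_NAME) -> str:
    return f"arn:aws:iam::{account_id}:role/{role_name}"

def scan_all(accounts=ACCOUNTS):
    for account in accounts:
        # Prowler assumes the role itself via -R. Its exit code is
        # non-zero when checks fail, so don't treat that as an error.
        subprocess.run(["prowler", "aws", "-R", role_arn(account),
                        "-M", "csv", "json-ocsf"])
```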
Troubleshooting Common Issues
Prowler scan times out in Lambda: Lambda has a 15-minute hard limit. If your account has thousands of resources, switch to Fargate or scope your Lambda scans to specific services (`-s s3 iam ec2`).
Too many findings flooding Slack: Start by scanning only Critical and High severity findings: `prowler aws --severity critical high`. As your team remediates those, expand to Medium.
Stale credentials during scheduled scans: If using an IAM role (recommended), temporary credentials are automatically refreshed. Avoid long-lived access keys for automated scanning.
Prowler can't access certain services: Ensure the SecurityAudit policy is attached. Some newer AWS services may need additional permissions — check the Prowler docs for service-specific requirements.
What's Next
This post covered the auditing and alerting pipeline. In future posts, I'll go deeper on:
- Cloud Custodian rules that auto-remediate the most common Prowler findings (so the fix happens before the alert).
- Building a DevSecOps pipeline from scratch with GitHub Actions — baking security into every commit.
- Generating SBOMs with Syft — because knowing what's inside your containers is just as important as scanning your cloud config.
If you're running a small team on AWS and want to stop guessing about your security posture, start with Prowler + the notification system. It takes an afternoon to set up and costs less than a cup of coffee per month.
Resources:
- Prowler GitHub — 584 AWS security checks, open-source
- Prowler Documentation — Installation, configuration, and check reference
- AWS Security Notification System — Serverless Slack alerting for AWS security events
- AWS Fargate + Prowler Terraform Module — Official AWS sample for scheduled scans
Originally published on Chaos to Control — DevSecOps blueprints for small teams.