/user/KayD @ karandeepsingh.ca :~$ cat a-look-at-the-security-features-offered-by-aws-and-best-practices-for-securing-your-cloud-resources.md

AWS Security Audit: From AWS CLI to a Go Security Scanner

Karandeep Singh
• 33 minutes read

Summary

Master AWS security auditing with CLI commands and Go. From checking IAM users and security groups to building a complete scanner that finds and reports misconfigurations.

Most AWS accounts have security problems. Old access keys, security groups open to the internet, S3 buckets with bad policies. The question is whether you find them before someone else does.

This article starts with AWS CLI commands to audit your account manually. Then we build a Go program that runs all those checks automatically. Each step follows the same pattern: run the CLI command first, understand what to look for, then write Go code that does the same thing.

We will hit real bugs along the way. Pagination issues, missing IPv6 checks, wrong error handling. Each bug appears first with wrong output, then gets fixed.

Prerequisites

You need these installed and working:

  • Go 1.21 or later
  • AWS CLI v2 configured with credentials (aws sts get-caller-identity should work)
  • An AWS account with at least read permissions for IAM, EC2, S3, and CloudTrail

Set up the Go project:

mkdir awsaudit && cd awsaudit
go mod init awsaudit
go get github.com/aws/aws-sdk-go-v2/config
go get github.com/aws/aws-sdk-go-v2/service/iam
go get github.com/aws/aws-sdk-go-v2/service/ec2
go get github.com/aws/aws-sdk-go-v2/service/s3
go get github.com/aws/aws-sdk-go-v2/service/cloudtrail

Verify AWS access:

aws sts get-caller-identity
{
    "UserId": "AIDACKCEVSQ6C2EXAMPLE",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/auditor"
}

If that works, your credentials are set up correctly.
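
The same check can be done from Go with the STS client. This is a throwaway sanity-check program, not part of the scanner we build below; it assumes one extra module that is not in the prerequisite list (go get github.com/aws/aws-sdk-go-v2/service/sts) and should live in its own directory so its main does not clash with the scanner's.

// stscheck/main.go - optional Go equivalent of `aws sts get-caller-identity`
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sts"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("load config: %v", err)
	}

	out, err := sts.NewFromConfig(cfg).GetCallerIdentity(ctx, &sts.GetCallerIdentityInput{})
	if err != nil {
		log.Fatalf("get caller identity: %v", err)
	}

	fmt.Printf("Account: %s\nARN: %s\n", *out.Account, *out.Arn)
}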

Step 1: Auditing IAM Users

IAM users are the first thing to check. Old users with static access keys and no MFA are the most common way AWS accounts get compromised.

CLI: List All Users

aws iam list-users --output table
-----------------------------------------------------------------
|                           ListUsers                           |
+---------------------------------------------------------------+
||                            Users                            ||
|+------------+----------------------------+-------------------+|
||  UserName  |       CreateDate           |     UserId        ||
|+------------+----------------------------+-------------------+|
||  admin     |  2022-01-15T08:30:00+00:00 |  AIDA...EXAMPLE1 ||
||  john      |  2023-03-20T14:15:00+00:00 |  AIDA...EXAMPLE2 ||
||  deploy-ci |  2023-06-01T09:00:00+00:00 |  AIDA...EXAMPLE3 ||
||  sarah     |  2024-01-10T11:45:00+00:00 |  AIDA...EXAMPLE4 ||
|+------------+----------------------------+-------------------+|

Four users. Now check each one for access keys, console passwords, and MFA.

CLI: Check Access Keys

aws iam list-access-keys --user-name john
{
    "AccessKeyMetadata": [
        {
            "UserName": "john",
            "AccessKeyId": "AKIA...EXAMPLE",
            "Status": "Active",
            "CreateDate": "2023-03-20T14:20:00+00:00"
        }
    ]
}

John has one active access key. It was created when his account was made. That key is about a year old.

CLI: Check Console Access

aws iam get-login-profile --user-name john
{
    "LoginProfile": {
        "UserName": "john",
        "CreateDate": "2023-03-20T14:15:00+00:00",
        "PasswordResetRequired": false
    }
}

John has console access. If a user does not have a console password, this command returns a NoSuchEntity error. That is normal and means the user is API-only.

CLI: Check MFA

aws iam list-mfa-devices --user-name john
{
    "MFADevices": []
}

Empty list. John has console access but no MFA. That is a security risk. Anyone with his password can log in without a second factor.

Go: IAM User Audit

Now write the Go code that checks every user at once.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/iam"
)

type IAMFinding struct {
	UserName    string
	HasConsole  bool
	HasMFA      bool
	AccessKeys  int
	ActiveKeys  int
}

func auditIAMUsers(ctx context.Context, client *iam.Client) ([]IAMFinding, error) {
	usersOutput, err := client.ListUsers(ctx, &iam.ListUsersInput{})
	if err != nil {
		return nil, fmt.Errorf("list users: %w", err)
	}

	var findings []IAMFinding

	for _, user := range usersOutput.Users {
		finding := IAMFinding{
			UserName: *user.UserName,
		}

		// Check console access
		_, err := client.GetLoginProfile(ctx, &iam.GetLoginProfileInput{
			UserName: user.UserName,
		})
		if err == nil {
			finding.HasConsole = true
		}

		// Check MFA devices
		mfaOutput, err := client.ListMFADevices(ctx, &iam.ListMFADevicesInput{
			UserName: user.UserName,
		})
		if err == nil {
			finding.HasMFA = len(mfaOutput.MFADevices) > 0
		}

		// Check access keys
		keysOutput, err := client.ListAccessKeys(ctx, &iam.ListAccessKeysInput{
			UserName: user.UserName,
		})
		if err == nil {
			finding.AccessKeys = len(keysOutput.AccessKeyMetadata)
			for _, key := range keysOutput.AccessKeyMetadata {
				if key.Status == "Active" {
					finding.ActiveKeys++
				}
			}
		}

		findings = append(findings, finding)
	}

	return findings, nil
}

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("load config: %v", err)
	}

	client := iam.NewFromConfig(cfg)
	findings, err := auditIAMUsers(ctx, client)
	if err != nil {
		log.Fatalf("audit IAM users: %v", err)
	}

	for _, f := range findings {
		status := "OK"
		if f.HasConsole && !f.HasMFA {
			status = "FAIL - console access without MFA"
		}
		fmt.Printf("%-15s keys=%d active=%d console=%v mfa=%v  %s\n",
			f.UserName, f.AccessKeys, f.ActiveKeys, f.HasConsole, f.HasMFA, status)
	}
}

Run it:

go run main.go
admin           keys=1 active=1 console=true  mfa=true   OK
john            keys=1 active=1 console=true  mfa=false  FAIL - console access without MFA
deploy-ci       keys=2 active=2 console=false mfa=false  OK
sarah           keys=0 active=0 console=true  mfa=true   OK

John shows up as a failure. The deploy-ci user has no console access so MFA is not required for it. That looks correct.

Bug: Pagination on MFA Devices

The code above calls ListMFADevices once and checks the length. But this API call paginates. If a user has many MFA devices (virtual, hardware, U2F keys), you might only get the first page.

Run this against a user with multiple MFA devices:

aws iam list-mfa-devices --user-name admin
{
    "MFADevices": [
        {
            "UserName": "admin",
            "SerialNumber": "arn:aws:iam::123456789012:mfa/admin-virtual",
            "EnableDate": "2022-01-15T09:00:00+00:00"
        }
    ],
    "IsTruncated": false
}

In this case IsTruncated is false, so one page is enough. But the code never checks that field. If it were true, the code would miss devices on the following pages.

Here is the wrong behavior. The code looks only at the first page: if that page happens to be empty but truncated, it concludes the user has no MFA and raises a false alarm. A false alarm is harmless in this direction, but the same single-call pattern silently drops real results when applied to APIs that routinely paginate, such as ListUsers and ListAccessKeys on larger accounts.

Fix: Use a Paginator

Replace the MFA check with a paginator:

func checkMFA(ctx context.Context, client *iam.Client, userName *string) (bool, error) {
	paginator := iam.NewListMFADevicesPaginator(client, &iam.ListMFADevicesInput{
		UserName: userName,
	})

	for paginator.HasMorePages() {
		page, err := paginator.NextPage(ctx)
		if err != nil {
			return false, fmt.Errorf("list MFA devices: %w", err)
		}
		if len(page.MFADevices) > 0 {
			return true, nil
		}
	}

	return false, nil
}

Now the code walks all pages. As soon as it finds one MFA device, it returns true. If it exhausts all pages with no devices, it returns false. This pattern works for any paginated AWS API.

Update the auditIAMUsers function to use it:

// Replace the MFA check block with:
hasMFA, err := checkMFA(ctx, client, user.UserName)
if err == nil {
    finding.HasMFA = hasMFA
}

The output stays the same for a small account. The difference matters when you have users with multiple MFA devices across pages.
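
The same paginator shape applies to ListUsers and ListAccessKeys, which are far more likely to span multiple pages on a real account. A minimal sketch for access keys (the helper name is mine, not part of the final scanner):

// countActiveKeys walks every page of ListAccessKeys for one user and
// counts total and active keys. Illustrative helper, not used later as-is.
func countActiveKeys(ctx context.Context, client *iam.Client, userName *string) (int, int, error) {
	total, active := 0, 0
	paginator := iam.NewListAccessKeysPaginator(client, &iam.ListAccessKeysInput{
		UserName: userName,
	})
	for paginator.HasMorePages() {
		page, err := paginator.NextPage(ctx)
		if err != nil {
			return 0, 0, fmt.Errorf("list access keys: %w", err)
		}
		for _, key := range page.AccessKeyMetadata {
			total++
			if key.Status == "Active" {
				active++
			}
		}
	}
	return total, active, nil
}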

Step 2: Checking Security Groups

Security groups control what network traffic reaches your EC2 instances, RDS databases, and other resources. A group that allows SSH from anywhere is an open door.

CLI: List All Security Groups

aws ec2 describe-security-groups --output json --query 'SecurityGroups[*].{ID:GroupId,Name:GroupName,VPC:VpcId}'
[
    {
        "ID": "sg-0a1b2c3d4e5f6a7b8",
        "Name": "default",
        "VPC": "vpc-0123456789abcdef0"
    },
    {
        "ID": "sg-1a2b3c4d5e6f7a8b9",
        "Name": "web-servers",
        "VPC": "vpc-0123456789abcdef0"
    },
    {
        "ID": "sg-2a3b4c5d6e7f8a9b0",
        "Name": "database",
        "VPC": "vpc-0123456789abcdef0"
    }
]

Three groups. Now find which ones are open to the world.

CLI: Find Groups Open to the World

aws ec2 describe-security-groups \
  --filters "Name=ip-permission.cidr-ip,Values=0.0.0.0/0" \
  --query 'SecurityGroups[*].{ID:GroupId,Name:GroupName}' \
  --output table
----------------------------------------------
|          DescribeSecurityGroups             |
+------------------------+-------------------+
|          ID            |      Name         |
+------------------------+-------------------+
|  sg-1a2b3c4d5e6f7a8b9 |  web-servers      |
+------------------------+-------------------+

The web-servers group allows traffic from 0.0.0.0/0. That might be fine for port 80 and 443. But check what ports are open:

aws ec2 describe-security-groups \
  --group-ids sg-1a2b3c4d5e6f7a8b9 \
  --query 'SecurityGroups[0].IpPermissions[*].{Proto:IpProtocol,From:FromPort,To:ToPort,CIDR:IpRanges[*].CidrIp}'
[
    {
        "Proto": "tcp",
        "From": 80,
        "To": 80,
        "CIDR": ["0.0.0.0/0"]
    },
    {
        "Proto": "tcp",
        "From": 443,
        "To": 443,
        "CIDR": ["0.0.0.0/0"]
    },
    {
        "Proto": "tcp",
        "From": 22,
        "To": 22,
        "CIDR": ["0.0.0.0/0"]
    }
]

Port 80 and 443 open to the world is normal for a web server. Port 22 (SSH) open to 0.0.0.0/0 is a problem. SSH should be restricted to specific IPs or accessed through a bastion host.
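
If you want to close such a rule from code rather than the console, the EC2 API has RevokeSecurityGroupIngress. A hedged remediation sketch, separate from the read-only scanner (the helper name is mine, and it assumes the github.com/aws/aws-sdk-go-v2/aws package is imported for aws.String and aws.Int32):

// revokeOpenSSH removes the IPv4 world-open SSH rule from one group.
// The matching IPv6 rule (::/0) needs a separate call using IpPermissions.
// Requires the ec2:RevokeSecurityGroupIngress permission.
func revokeOpenSSH(ctx context.Context, client *ec2.Client, groupID string) error {
	_, err := client.RevokeSecurityGroupIngress(ctx, &ec2.RevokeSecurityGroupIngressInput{
		GroupId:    aws.String(groupID),
		IpProtocol: aws.String("tcp"),
		FromPort:   aws.Int32(22),
		ToPort:     aws.Int32(22),
		CidrIp:     aws.String("0.0.0.0/0"),
	})
	return err
}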

Go: Security Group Audit

Define the sensitive ports to check:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
	ec2types "github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

var sensitivePorts = map[int32]string{
	22:    "SSH",
	3306:  "MySQL",
	5432:  "PostgreSQL",
	6379:  "Redis",
	27017: "MongoDB",
	9200:  "Elasticsearch",
}

type SGFinding struct {
	GroupID   string
	GroupName string
	Port      int32
	PortName  string
	CIDR      string
}

func auditSecurityGroups(ctx context.Context, client *ec2.Client) ([]SGFinding, error) {
	output, err := client.DescribeSecurityGroups(ctx, &ec2.DescribeSecurityGroupsInput{})
	if err != nil {
		return nil, fmt.Errorf("describe security groups: %w", err)
	}

	var findings []SGFinding

	for _, sg := range output.SecurityGroups {
		for _, perm := range sg.IpPermissions {
			if perm.FromPort == nil {
				continue
			}

			portName, isSensitive := sensitivePorts[*perm.FromPort]
			if !isSensitive {
				continue
			}

			for _, ipRange := range perm.IpRanges {
				if ipRange.CidrIp != nil && *ipRange.CidrIp == "0.0.0.0/0" {
					findings = append(findings, SGFinding{
						GroupID:   *sg.GroupId,
						GroupName: *sg.GroupName,
						Port:      *perm.FromPort,
						PortName:  portName,
						CIDR:      *ipRange.CidrIp,
					})
				}
			}
		}
	}

	return findings, nil
}

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("load config: %v", err)
	}

	client := ec2.NewFromConfig(cfg)
	findings, err := auditSecurityGroups(ctx, client)
	if err != nil {
		log.Fatalf("audit security groups: %v", err)
	}

	if len(findings) == 0 {
		fmt.Println("PASS: No sensitive ports open to the world")
		return
	}

	for _, f := range findings {
		fmt.Printf("FAIL: %s (%s) port %d (%s) open to %s\n",
			f.GroupID, f.GroupName, f.Port, f.PortName, f.CIDR)
	}
}

Run it:

go run main.go
FAIL: sg-1a2b3c4d5e6f7a8b9 (web-servers) port 22 (SSH) open to 0.0.0.0/0

Good. It found the SSH rule. But there is a problem.

Bug: Missing IPv6 Checks

The code only checks IpRanges, which contains IPv4 CIDR blocks. AWS security groups also have Ipv6Ranges. The IPv6 equivalent of “open to the world” is ::/0.

A security group can have a rule that allows SSH from ::/0 but not from 0.0.0.0/0. The code above would miss that completely.

Check the CLI to see both:

aws ec2 describe-security-groups \
  --group-ids sg-1a2b3c4d5e6f7a8b9 \
  --query 'SecurityGroups[0].IpPermissions[*].{Proto:IpProtocol,From:FromPort,IPv4:IpRanges,IPv6:Ipv6Ranges}'
[
    {
        "Proto": "tcp",
        "From": 22,
        "IPv4": [{"CidrIp": "0.0.0.0/0"}],
        "IPv6": [{"CidrIpv6": "::/0"}]
    }
]

Both IPv4 and IPv6 are open. The scanner missed the IPv6 rule.

Fix: Check Both IpRanges and Ipv6Ranges

Add IPv6 checking to the inner loop:

func checkPermission(perm ec2types.IpPermission, sg ec2types.SecurityGroup) []SGFinding {
	var findings []SGFinding

	if perm.FromPort == nil {
		return nil
	}

	portName, isSensitive := sensitivePorts[*perm.FromPort]
	if !isSensitive {
		return nil
	}

	// Check IPv4 ranges
	for _, ipRange := range perm.IpRanges {
		if ipRange.CidrIp != nil && *ipRange.CidrIp == "0.0.0.0/0" {
			findings = append(findings, SGFinding{
				GroupID:   *sg.GroupId,
				GroupName: *sg.GroupName,
				Port:      *perm.FromPort,
				PortName:  portName,
				CIDR:      *ipRange.CidrIp,
			})
		}
	}

	// Check IPv6 ranges
	for _, ipv6Range := range perm.Ipv6Ranges {
		if ipv6Range.CidrIpv6 != nil && *ipv6Range.CidrIpv6 == "::/0" {
			findings = append(findings, SGFinding{
				GroupID:   *sg.GroupId,
				GroupName: *sg.GroupName,
				Port:      *perm.FromPort,
				PortName:  portName,
				CIDR:      *ipv6Range.CidrIpv6,
			})
		}
	}

	return findings
}

Update the main loop in auditSecurityGroups:

for _, sg := range output.SecurityGroups {
    for _, perm := range sg.IpPermissions {
        results := checkPermission(perm, sg)
        findings = append(findings, results...)
    }
}

Now the output catches both:

FAIL: sg-1a2b3c4d5e6f7a8b9 (web-servers) port 22 (SSH) open to 0.0.0.0/0
FAIL: sg-1a2b3c4d5e6f7a8b9 (web-servers) port 22 (SSH) open to ::/0

Two findings for the same port. One IPv4, one IPv6. Both are problems.

Step 3: S3 Bucket Policy Audit

S3 buckets can be made public through ACLs, bucket policies, or public access block settings. All three need checking.

CLI: List All Buckets

aws s3api list-buckets --query 'Buckets[*].Name' --output table
------------------------------
|        ListBuckets         |
+----------------------------+
|  app-assets-prod           |
|  company-backups           |
|  website-static            |
|  temp-data-uploads         |
+----------------------------+

CLI: Check Public Access Block

The public access block is the first line of defense. If all four settings are true, the bucket cannot be made public through ACLs or policies.

aws s3api get-public-access-block --bucket app-assets-prod
{
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": true,
        "IgnorePublicAcls": true,
        "BlockPublicPolicy": true,
        "RestrictPublicBuckets": true
    }
}

All four set to true. Good.

aws s3api get-public-access-block --bucket temp-data-uploads
An error occurred (NoSuchPublicAccessBlockConfiguration) when calling the
GetPublicAccessBlock operation: The public access block configuration was not
found

No public access block configured. That means the bucket can be made public.

CLI: Check Bucket ACL

aws s3api get-bucket-acl --bucket temp-data-uploads
{
    "Owner": {
        "ID": "abc123..."
    },
    "Grants": [
        {
            "Grantee": {
                "Type": "CanonicalUser",
                "ID": "abc123..."
            },
            "Permission": "FULL_CONTROL"
        }
    ]
}

Only the owner has access. No public grants. But check the bucket policy too.

CLI: Check Bucket Policy

aws s3api get-bucket-policy --bucket temp-data-uploads
{
    "Policy": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"PublicRead\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"s3:GetObject\",\"Resource\":\"arn:aws:s3:::temp-data-uploads/*\"}]}"
}

There it is. "Principal": "*" means anyone in the world can read objects from this bucket. Combined with no public access block, this bucket is publicly readable.

aws s3api get-bucket-policy --bucket company-backups
An error occurred (NoSuchBucketPolicy) when calling the GetBucketPolicy
operation: The bucket policy does not exist

No bucket policy. This is actually fine. No policy means no additional access grants.

Go: S3 Bucket Audit

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"strings"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

type S3Finding struct {
	BucketName string
	Issue      string
	Severity   string
}

type BucketPolicy struct {
	Statement []struct {
		Principal interface{} `json:"Principal"`
		Effect    string      `json:"Effect"`
	} `json:"Statement"`
}

func hasPrincipalStar(principal interface{}) bool {
	switch p := principal.(type) {
	case string:
		return p == "*"
	case map[string]interface{}:
		for _, v := range p {
			switch val := v.(type) {
			case string:
				if val == "*" {
					return true
				}
			case []interface{}:
				for _, item := range val {
					if str, ok := item.(string); ok && str == "*" {
						return true
					}
				}
			}
		}
	}
	return false
}

func auditS3Buckets(ctx context.Context, client *s3.Client) ([]S3Finding, error) {
	bucketsOutput, err := client.ListBuckets(ctx, &s3.ListBucketsInput{})
	if err != nil {
		return nil, fmt.Errorf("list buckets: %w", err)
	}

	var findings []S3Finding

	for _, bucket := range bucketsOutput.Buckets {
		name := *bucket.Name

		// Check public access block
		pubBlock, err := client.GetPublicAccessBlock(ctx, &s3.GetPublicAccessBlockInput{
			Bucket: &name,
		})
		if err != nil {
			findings = append(findings, S3Finding{
				BucketName: name,
				Issue:      "no public access block configured",
				Severity:   "WARN",
			})
		} else {
			cfg := pubBlock.PublicAccessBlockConfiguration
			if !boolVal(cfg.BlockPublicAcls) || !boolVal(cfg.IgnorePublicAcls) ||
				!boolVal(cfg.BlockPublicPolicy) || !boolVal(cfg.RestrictPublicBuckets) {
				findings = append(findings, S3Finding{
					BucketName: name,
					Issue:      "public access block is not fully enabled",
					Severity:   "WARN",
				})
			}
		}

		// Check bucket policy
		policyOutput, err := client.GetBucketPolicy(ctx, &s3.GetBucketPolicyInput{
			Bucket: &name,
		})
		if err != nil {
			// No policy - treat as insecure
			findings = append(findings, S3Finding{
				BucketName: name,
				Issue:      "no bucket policy found (potentially insecure)",
				Severity:   "FAIL",
			})
		} else if policyOutput.Policy != nil {
			var policy BucketPolicy
			if err := json.Unmarshal([]byte(*policyOutput.Policy), &policy); err == nil {
				for _, stmt := range policy.Statement {
					if strings.EqualFold(stmt.Effect, "Allow") && hasPrincipalStar(stmt.Principal) {
						findings = append(findings, S3Finding{
							BucketName: name,
							Issue:      "bucket policy allows public access (Principal: *)",
							Severity:   "FAIL",
						})
						break
					}
				}
			}
		}
	}

	return findings, nil
}

// boolVal treats the SDK's pointer booleans as false when nil.
// Recent SDK versions expose the public access block fields as *bool.
func boolVal(b *bool) bool {
	return b != nil && *b
}

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("load config: %v", err)
	}

	client := s3.NewFromConfig(cfg)
	findings, err := auditS3Buckets(ctx, client)
	if err != nil {
		log.Fatalf("audit S3 buckets: %v", err)
	}

	for _, f := range findings {
		fmt.Printf("%-5s  %-25s  %s\n", f.Severity, f.BucketName, f.Issue)
	}
}

Run it:

go run main.go
WARN   temp-data-uploads          no public access block configured
FAIL   temp-data-uploads          bucket policy allows public access (Principal: *)
FAIL   company-backups            no bucket policy found (potentially insecure)
FAIL   app-assets-prod            no bucket policy found (potentially insecure)
FAIL   website-static             no bucket policy found (potentially insecure)

Wait. Three buckets show as FAIL because they have no bucket policy. But having no bucket policy is not insecure. It means no additional access is granted through a policy. The default is “deny all.”

Bug: Treating No Policy as Insecure

The code treats a NoSuchBucketPolicy error as a FAIL. Look at this block:

if err != nil {
    // No policy - treat as insecure
    findings = append(findings, S3Finding{
        BucketName: name,
        Issue:      "no bucket policy found (potentially insecure)",
        Severity:   "FAIL",
    })
}

This is wrong. When GetBucketPolicy returns an error, it could mean:

  1. The bucket has no policy (NoSuchBucketPolicy) – this is safe
  2. You do not have permission to read the policy – this is unknown
  3. Some other API error – this needs investigation

The code treats all three the same way.

Fix: Check the Error Type

Check if the error is specifically NoSuchBucketPolicy. If it is, the bucket has no policy and that is safe. For any other error, report it as a warning.

import (
	"errors"

	"github.com/aws/smithy-go"
)

Replace the bucket policy check:

// Check bucket policy
policyOutput, err := client.GetBucketPolicy(ctx, &s3.GetBucketPolicyInput{
    Bucket: &name,
})
if err != nil {
    var apiErr smithy.APIError
    if errors.As(err, &apiErr) && apiErr.ErrorCode() == "NoSuchBucketPolicy" {
        // No policy means no public access through policy. This is safe.
    } else {
        findings = append(findings, S3Finding{
            BucketName: name,
            Issue:      fmt.Sprintf("could not read bucket policy: %v", err),
            Severity:   "WARN",
        })
    }
} else if policyOutput.Policy != nil {
    var policy BucketPolicy
    if err := json.Unmarshal([]byte(*policyOutput.Policy), &policy); err == nil {
        for _, stmt := range policy.Statement {
            if strings.EqualFold(stmt.Effect, "Allow") && hasPrincipalStar(stmt.Principal) {
                findings = append(findings, S3Finding{
                    BucketName: name,
                    Issue:      "bucket policy allows public access (Principal: *)",
                    Severity:   "FAIL",
                })
                break
            }
        }
    }
}

You need to add the smithy-go dependency:

go get github.com/aws/smithy-go

Now the output is correct:

WARN   temp-data-uploads          no public access block configured
FAIL   temp-data-uploads          bucket policy allows public access (Principal: *)

Only the bucket that actually has a public policy shows up as a failure. The false positives are gone.
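
One gap remains from the CLI walkthrough: the Go audit checks the public access block and the policy, but not the bucket ACL. If you want the ACL covered in code too, here is a minimal sketch (the helper name is mine; a grant to the AllUsers or AuthenticatedUsers group is what makes a bucket public via ACL):

// bucketACLIsPublic reports whether the bucket ACL grants access to
// everyone (AllUsers) or to any authenticated AWS account (AuthenticatedUsers).
func bucketACLIsPublic(ctx context.Context, client *s3.Client, name string) (bool, error) {
	out, err := client.GetBucketAcl(ctx, &s3.GetBucketAclInput{Bucket: &name})
	if err != nil {
		return false, fmt.Errorf("get bucket acl: %w", err)
	}
	for _, grant := range out.Grants {
		if grant.Grantee == nil || grant.Grantee.URI == nil {
			continue
		}
		switch *grant.Grantee.URI {
		case "http://acs.amazonaws.com/groups/global/AllUsers",
			"http://acs.amazonaws.com/groups/global/AuthenticatedUsers":
			return true, nil
		}
	}
	return false, nil
}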

Step 4: CloudTrail and Logging Checks

CloudTrail records API calls in your account. Without it, you have no way to know who did what. A trail can exist but be turned off. You need to check both.

CLI: List Trails

aws cloudtrail describe-trails --query 'trailList[*].{Name:Name,IsMultiRegion:IsMultiRegionTrail,Bucket:S3BucketName}'
[
    {
        "Name": "management-trail",
        "IsMultiRegion": true,
        "Bucket": "company-cloudtrail-logs"
    }
]

One trail. It is multi-region, which means it captures API calls in every region. Good.

CLI: Check if Logging is Active

aws cloudtrail get-trail-status --name management-trail
{
    "IsLogging": true,
    "LatestDeliveryTime": "2024-02-28T15:30:00+00:00",
    "StartLoggingTime": "2023-01-15T08:00:00+00:00",
    "LatestNotificationTime": "2024-02-28T15:30:00+00:00"
}

IsLogging is true and logs were delivered recently. This trail is working.

But what if the trail exists and is not logging?

aws cloudtrail get-trail-status --name old-trail
{
    "IsLogging": false,
    "StopLoggingTime": "2023-11-01T12:00:00+00:00"
}

The trail exists but logging was stopped months ago. Nobody noticed.

CLI: Check VPC Flow Logs

aws ec2 describe-flow-logs --query 'FlowLogs[*].{ID:FlowLogId,Resource:ResourceId,Status:FlowLogStatus,Dest:LogDestinationType}'
[
    {
        "ID": "fl-0123456789abcdef0",
        "Resource": "vpc-0123456789abcdef0",
        "Status": "ACTIVE",
        "Dest": "cloud-watch-logs"
    }
]

VPC flow logs exist and are active. They record network traffic metadata for the VPC.
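
The Go audit below sticks to CloudTrail. If you also want flow-log coverage as a finding, a hedged sketch using the EC2 client (the function name is mine, and it ignores pagination for brevity):

// vpcsWithoutFlowLogs returns the IDs of VPCs that have no ACTIVE flow log.
func vpcsWithoutFlowLogs(ctx context.Context, client *ec2.Client) ([]string, error) {
	vpcsOut, err := client.DescribeVpcs(ctx, &ec2.DescribeVpcsInput{})
	if err != nil {
		return nil, fmt.Errorf("describe VPCs: %w", err)
	}
	flowOut, err := client.DescribeFlowLogs(ctx, &ec2.DescribeFlowLogsInput{})
	if err != nil {
		return nil, fmt.Errorf("describe flow logs: %w", err)
	}

	covered := map[string]bool{}
	for _, fl := range flowOut.FlowLogs {
		if fl.ResourceId != nil && fl.FlowLogStatus != nil && *fl.FlowLogStatus == "ACTIVE" {
			covered[*fl.ResourceId] = true
		}
	}

	var missing []string
	for _, vpc := range vpcsOut.Vpcs {
		if vpc.VpcId != nil && !covered[*vpc.VpcId] {
			missing = append(missing, *vpc.VpcId)
		}
	}
	return missing, nil
}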

Go: CloudTrail Audit

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/cloudtrail"
)

type TrailFinding struct {
	TrailName     string
	Issue         string
	Severity      string
}

func auditCloudTrail(ctx context.Context, client *cloudtrail.Client) ([]TrailFinding, error) {
	trailsOutput, err := client.DescribeTrails(ctx, &cloudtrail.DescribeTrailsInput{})
	if err != nil {
		return nil, fmt.Errorf("describe trails: %w", err)
	}

	if len(trailsOutput.TrailList) == 0 {
		return []TrailFinding{{
			TrailName: "(none)",
			Issue:     "no CloudTrail trails configured",
			Severity:  "FAIL",
		}}, nil
	}

	var findings []TrailFinding
	hasMultiRegion := false

	for _, trail := range trailsOutput.TrailList {
		name := ""
		if trail.Name != nil {
			name = *trail.Name
		}

		if trail.IsMultiRegionTrail != nil && *trail.IsMultiRegionTrail {
			hasMultiRegion = true
		}

		// Check if logging is active
		if trail.TrailARN == nil {
			continue
		}

		// Just check that the trail exists
		fmt.Printf("Found trail: %s (multi-region: %v)\n", name,
			trail.IsMultiRegionTrail != nil && *trail.IsMultiRegionTrail)
	}

	if !hasMultiRegion {
		findings = append(findings, TrailFinding{
			TrailName: "(account)",
			Issue:     "no multi-region trail found - API calls in other regions are not logged",
			Severity:  "FAIL",
		})
	}

	return findings, nil
}

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("load config: %v", err)
	}

	client := cloudtrail.NewFromConfig(cfg)
	findings, err := auditCloudTrail(ctx, client)
	if err != nil {
		log.Fatalf("audit CloudTrail: %v", err)
	}

	for _, f := range findings {
		fmt.Printf("%-5s  %-25s  %s\n", f.Severity, f.TrailName, f.Issue)
	}

	if len(findings) == 0 {
		fmt.Println("PASS: CloudTrail is properly configured")
	}
}

Run it:

go run main.go
Found trail: management-trail (multi-region: true)
PASS: CloudTrail is properly configured

Looks correct. But there is a bug.

Bug: Trail Exists but Is Not Logging

The code checks if trails exist and if any are multi-region. It does not check if logging is actually turned on. A trail can be created and then stopped. The DescribeTrails call only returns the trail configuration, not its current status.

With the stopped trail from earlier, the code would output:

Found trail: management-trail (multi-region: true)
Found trail: old-trail (multi-region: false)
PASS: CloudTrail is properly configured

It says PASS even though old-trail has logging turned off. More importantly, even management-trail could have logging turned off and the code would still say PASS.

Fix: Call GetTrailStatus for Each Trail

Add a status check:

func auditCloudTrail(ctx context.Context, client *cloudtrail.Client) ([]TrailFinding, error) {
	trailsOutput, err := client.DescribeTrails(ctx, &cloudtrail.DescribeTrailsInput{})
	if err != nil {
		return nil, fmt.Errorf("describe trails: %w", err)
	}

	if len(trailsOutput.TrailList) == 0 {
		return []TrailFinding{{
			TrailName: "(none)",
			Issue:     "no CloudTrail trails configured",
			Severity:  "FAIL",
		}}, nil
	}

	var findings []TrailFinding
	hasActiveMultiRegion := false

	for _, trail := range trailsOutput.TrailList {
		name := ""
		if trail.Name != nil {
			name = *trail.Name
		}

		if trail.TrailARN == nil {
			continue
		}

		// Check if logging is actually active
		statusOutput, err := client.GetTrailStatus(ctx, &cloudtrail.GetTrailStatusInput{
			Name: trail.TrailARN,
		})
		if err != nil {
			findings = append(findings, TrailFinding{
				TrailName: name,
				Issue:     fmt.Sprintf("could not get trail status: %v", err),
				Severity:  "WARN",
			})
			continue
		}

		// IsLogging is a *bool in the SDK; treat nil as not logging.
		isLogging := statusOutput.IsLogging != nil && *statusOutput.IsLogging
		isMultiRegion := trail.IsMultiRegionTrail != nil && *trail.IsMultiRegionTrail

		if !isLogging {
			findings = append(findings, TrailFinding{
				TrailName: name,
				Issue:     "trail exists but logging is stopped",
				Severity:  "FAIL",
			})
		}

		if isLogging && isMultiRegion {
			hasActiveMultiRegion = true
		}
	}

	if !hasActiveMultiRegion {
		findings = append(findings, TrailFinding{
			TrailName: "(account)",
			Issue:     "no active multi-region trail - API calls in other regions may not be logged",
			Severity:  "FAIL",
		})
	}

	return findings, nil
}

Now the output correctly flags the stopped trail:

FAIL   old-trail                  trail exists but logging is stopped

The blanket "PASS: CloudTrail is properly configured" line no longer prints, because there is now at least one finding. The stopped trail is flagged, and the active multi-region trail keeps the account-level multi-region check quiet. This is the correct behavior.

Step 5: Access Key Age and Rotation

AWS access keys that are never rotated are a risk. If a key is leaked, it stays valid until someone disables it. Keys older than 90 days should be rotated.
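
Rotation itself is a separate workflow: create a new key, move the workload over, deactivate the old key, and delete it only after nothing breaks. A hedged sketch of the deactivation step in Go (the values reuse this article's examples, and it assumes the github.com/aws/aws-sdk-go-v2/aws and github.com/aws/aws-sdk-go-v2/service/iam/types packages are imported as aws and iamtypes):

// Deactivate rather than delete, so the key can be re-enabled quickly
// if something still depends on it. Illustrative values, not a real key.
_, err := client.UpdateAccessKey(ctx, &iam.UpdateAccessKeyInput{
    UserName:    aws.String("deploy-ci"),
    AccessKeyId: aws.String("AKIA...KEY1EXAMPLE"),
    Status:      iamtypes.StatusTypeInactive,
})
if err != nil {
    log.Fatalf("deactivate key: %v", err)
}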

CLI: List Access Keys

aws iam list-access-keys --user-name deploy-ci --output text
ACCESSKEYMETADATA	AKIA...KEY1EXAMPLE	deploy-ci	Active	2023-01-15T10:00:00+00:00
ACCESSKEYMETADATA	AKIA...KEY2EXAMPLE	deploy-ci	Active	2023-09-01T14:30:00+00:00

Two keys. The first was created in January 2023. The second in September 2023. Both are over 90 days old.

CLI: Calculate Key Age

Use date math in bash to find how old each key is:

key_date="2023-01-15T10:00:00+00:00"
key_epoch=$(date -d "$key_date" +%s)
now_epoch=$(date +%s)
age_days=$(( (now_epoch - key_epoch) / 86400 ))
echo "Key age: $age_days days"
Key age: 411 days

411 days. That key should have been rotated a long time ago.

CLI: One-liner to Find Old Keys

for user in $(aws iam list-users --query 'Users[*].UserName' --output text); do
  aws iam list-access-keys --user-name "$user" --query "AccessKeyMetadata[?Status=='Active'].[UserName,AccessKeyId,CreateDate]" --output text
done
admin	AKIA...ADMKEY	2022-01-15T08:30:00+00:00
john	AKIA...JHNKEY	2023-03-20T14:20:00+00:00
deploy-ci	AKIA...KEY1	2023-01-15T10:00:00+00:00
deploy-ci	AKIA...KEY2	2023-09-01T14:30:00+00:00

All four active keys listed. Now you can pipe this through date math to find which are older than 90 days.

Go: Access Key Age Audit

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/iam"
)

type KeyFinding struct {
	UserName    string
	AccessKeyID string
	AgeDays     int
	Status      string
	Severity    string
}

func auditAccessKeys(ctx context.Context, client *iam.Client, maxAgeDays int) ([]KeyFinding, error) {
	usersOutput, err := client.ListUsers(ctx, &iam.ListUsersInput{})
	if err != nil {
		return nil, fmt.Errorf("list users: %w", err)
	}

	var findings []KeyFinding
	now := time.Now()

	for _, user := range usersOutput.Users {
		keysOutput, err := client.ListAccessKeys(ctx, &iam.ListAccessKeysInput{
			UserName: user.UserName,
		})
		if err != nil {
			continue
		}

		for _, key := range keysOutput.AccessKeyMetadata {
			if key.CreateDate == nil || key.AccessKeyId == nil {
				continue
			}

			age := now.Sub(*key.CreateDate)
			ageDays := int(age.Hours() / 24)

			status := string(key.Status)
			severity := "OK"

			if status == "Active" && ageDays > maxAgeDays {
				severity = "FAIL"
			} else if status == "Active" && ageDays > maxAgeDays/2 {
				severity = "WARN"
			}

			if severity != "OK" {
				findings = append(findings, KeyFinding{
					UserName:    *user.UserName,
					AccessKeyID: *key.AccessKeyId,
					AgeDays:     ageDays,
					Status:      status,
					Severity:    severity,
				})
			}
		}
	}

	return findings, nil
}

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("load config: %v", err)
	}

	client := iam.NewFromConfig(cfg)
	findings, err := auditAccessKeys(ctx, client, 90)
	if err != nil {
		log.Fatalf("audit access keys: %v", err)
	}

	if len(findings) == 0 {
		fmt.Println("PASS: No access keys older than 90 days")
		return
	}

	for _, f := range findings {
		fmt.Printf("%-5s  %-15s  %-20s  %d days old (%s)\n",
			f.Severity, f.UserName, f.AccessKeyID, f.AgeDays, f.Status)
	}
}

Run it:

go run main.go
FAIL   admin            AKIA...ADMKEY         762 days old (Active)
FAIL   john             AKIA...JHNKEY         346 days old (Active)
FAIL   deploy-ci        AKIA...KEY1           411 days old (Active)
FAIL   deploy-ci        AKIA...KEY2           180 days old (Active)

All four keys are older than 90 days. The code correctly calculates the age.

But wait. This code has a subtle problem.

Bug: Time Parsing Format

The code uses *key.CreateDate directly as a time.Time. This works because the AWS SDK v2 returns *time.Time already parsed. But earlier versions of the SDK returned strings, and some AWS API responses in JSON format use 2023-01-15T10:00:00+00:00.

If you were parsing the string yourself, you might try:

parsed, err := time.Parse(time.RFC3339, "2023-01-15T10:00:00+00:00")

That parses fine, and time.RFC3339 also accepts the Z form (2023-01-15T10:00:00Z). But some AWS responses use a compact timestamp like 20230115T100000Z, which RFC3339 cannot parse.

Try parsing a compact format:

parsed, err := time.Parse(time.RFC3339, "20230115T100000Z")
// err: parsing time "20230115T100000Z" as "2006-01-02T15:04:05Z07:00": cannot parse...

That fails.

Fix: Use the SDK’s Time Type Directly

The AWS SDK v2 handles time parsing for you. The CreateDate field is already *time.Time. Never parse the time string yourself when using the SDK.

// Correct: use the SDK's parsed time directly
age := now.Sub(*key.CreateDate)
ageDays := int(age.Hours() / 24)

If you are working with raw JSON from the CLI (not the SDK), a fixed layout with a numeric offset works for +00:00 but rejects the Z suffix:

// For AWS CLI JSON output with a numeric offset such as +00:00
const awsTimeFormat = "2006-01-02T15:04:05-07:00"
parsed, err := time.Parse(awsTimeFormat, "2023-01-15T10:00:00+00:00")

The safer choice is time.RFC3339, which handles both the Z and +00:00 suffixes:

// RFC3339 handles both Z and +00:00
parsed, err := time.Parse(time.RFC3339, dateString)

The key point: when using the AWS SDK v2, the time fields are already parsed. Do not convert to string and parse again. The SDK version we are using does this correctly, so the code from the beginning of this step works as-is. The bug only shows up if you try to handle the raw API response yourself.
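
A quick way to convince yourself which strings a layout accepts (both of these parse with time.RFC3339, while the compact form from earlier does not):

// Both the +00:00 and Z variants parse with time.RFC3339.
for _, s := range []string{"2023-01-15T10:00:00+00:00", "2023-01-15T10:00:00Z"} {
    t, err := time.Parse(time.RFC3339, s)
    fmt.Println(t.UTC(), err)
}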

Step 6: Complete Security Scanner with Report

Now combine everything into a single scanner. It runs all five checks and produces a colored terminal report with a summary.

Project Structure

awsaudit/
  main.go         # entry point, report output
  finding.go      # shared Finding type and severity levels
  iam.go          # IAM user and access key audits
  secgroups.go    # security group audit
  s3.go           # S3 bucket audit
  cloudtrail.go   # CloudTrail audit
  go.mod
  go.sum

The Finding Type

All checks produce the same type of finding:

// finding.go
package main

// Severity levels
const (
	SevPass = "PASS"
	SevWarn = "WARN"
	SevFail = "FAIL"
)

type Finding struct {
	Check    string // which audit produced this
	Resource string // what resource has the issue
	Issue    string // what is wrong
	Severity string // PASS, WARN, FAIL
	Fix      string // how to fix it
}

IAM Audit (iam.go)

This combines the user audit and access key audit from Steps 1 and 5:

// iam.go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go-v2/service/iam"
)

func auditIAM(ctx context.Context, client *iam.Client) ([]Finding, error) {
	var findings []Finding

	paginator := iam.NewListUsersPaginator(client, &iam.ListUsersInput{})
	now := time.Now()

	for paginator.HasMorePages() {
		page, err := paginator.NextPage(ctx)
		if err != nil {
			return nil, fmt.Errorf("list users: %w", err)
		}

		for _, user := range page.Users {
			name := *user.UserName

			// Check console access + MFA
			_, loginErr := client.GetLoginProfile(ctx, &iam.GetLoginProfileInput{
				UserName: user.UserName,
			})
			hasConsole := loginErr == nil

			hasMFA := false
			mfaPaginator := iam.NewListMFADevicesPaginator(client, &iam.ListMFADevicesInput{
				UserName: user.UserName,
			})
			for mfaPaginator.HasMorePages() {
				mfaPage, err := mfaPaginator.NextPage(ctx)
				if err != nil {
					break
				}
				if len(mfaPage.MFADevices) > 0 {
					hasMFA = true
					break
				}
			}

			if hasConsole && !hasMFA {
				findings = append(findings, Finding{
					Check:    "IAM-MFA",
					Resource: name,
					Issue:    "console access enabled without MFA",
					Severity: SevFail,
					Fix:      fmt.Sprintf("aws iam enable-mfa-device for user %s", name),
				})
			}

			if hasConsole && hasMFA {
				findings = append(findings, Finding{
					Check:    "IAM-MFA",
					Resource: name,
					Issue:    "console access with MFA enabled",
					Severity: SevPass,
				})
			}

			// Check access key age
			keysPaginator := iam.NewListAccessKeysPaginator(client, &iam.ListAccessKeysInput{
				UserName: user.UserName,
			})
			for keysPaginator.HasMorePages() {
				keysPage, err := keysPaginator.NextPage(ctx)
				if err != nil {
					break
				}
				for _, key := range keysPage.AccessKeyMetadata {
					if key.CreateDate == nil || key.AccessKeyId == nil {
						continue
					}
					if string(key.Status) != "Active" {
						continue
					}

					ageDays := int(now.Sub(*key.CreateDate).Hours() / 24)
					keyID := *key.AccessKeyId

					if ageDays > 90 {
						findings = append(findings, Finding{
							Check:    "IAM-KeyAge",
							Resource: fmt.Sprintf("%s/%s", name, keyID),
							Issue:    fmt.Sprintf("access key is %d days old (max 90)", ageDays),
							Severity: SevFail,
							Fix:      fmt.Sprintf("rotate key %s for user %s", keyID, name),
						})
					} else if ageDays > 45 {
						findings = append(findings, Finding{
							Check:    "IAM-KeyAge",
							Resource: fmt.Sprintf("%s/%s", name, keyID),
							Issue:    fmt.Sprintf("access key is %d days old (rotate soon)", ageDays),
							Severity: SevWarn,
						})
					} else {
						findings = append(findings, Finding{
							Check:    "IAM-KeyAge",
							Resource: fmt.Sprintf("%s/%s", name, keyID),
							Issue:    fmt.Sprintf("access key is %d days old", ageDays),
							Severity: SevPass,
						})
					}
				}
			}
		}
	}

	return findings, nil
}

Security Groups Audit (secgroups.go)

// secgroups.go
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/ec2"
	ec2types "github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

var sensitivePortMap = map[int32]string{
	22:    "SSH",
	3306:  "MySQL",
	5432:  "PostgreSQL",
	6379:  "Redis",
	27017: "MongoDB",
	9200:  "Elasticsearch",
}

func auditSecurityGroupsAll(ctx context.Context, client *ec2.Client) ([]Finding, error) {
	var findings []Finding

	paginator := ec2.NewDescribeSecurityGroupsPaginator(client, &ec2.DescribeSecurityGroupsInput{})

	for paginator.HasMorePages() {
		page, err := paginator.NextPage(ctx)
		if err != nil {
			return nil, fmt.Errorf("describe security groups: %w", err)
		}

		for _, sg := range page.SecurityGroups {
			sgFindings := checkSecurityGroup(sg)
			findings = append(findings, sgFindings...)
		}
	}

	return findings, nil
}

func checkSecurityGroup(sg ec2types.SecurityGroup) []Finding {
	var findings []Finding
	groupLabel := fmt.Sprintf("%s (%s)", *sg.GroupId, *sg.GroupName)
	hasSensitiveOpen := false

	for _, perm := range sg.IpPermissions {
		if perm.FromPort == nil {
			continue
		}
		port := *perm.FromPort
		portName, isSensitive := sensitivePortMap[port]
		if !isSensitive {
			continue
		}

		// Check IPv4
		for _, ipRange := range perm.IpRanges {
			if ipRange.CidrIp != nil && *ipRange.CidrIp == "0.0.0.0/0" {
				hasSensitiveOpen = true
				findings = append(findings, Finding{
					Check:    "SG-OpenPort",
					Resource: groupLabel,
					Issue:    fmt.Sprintf("port %d (%s) open to 0.0.0.0/0", port, portName),
					Severity: SevFail,
					Fix:      fmt.Sprintf("restrict port %d to specific IPs or remove the rule", port),
				})
			}
		}

		// Check IPv6
		for _, ipv6Range := range perm.Ipv6Ranges {
			if ipv6Range.CidrIpv6 != nil && *ipv6Range.CidrIpv6 == "::/0" {
				hasSensitiveOpen = true
				findings = append(findings, Finding{
					Check:    "SG-OpenPort",
					Resource: groupLabel,
					Issue:    fmt.Sprintf("port %d (%s) open to ::/0 (IPv6)", port, portName),
					Severity: SevFail,
					Fix:      fmt.Sprintf("restrict port %d to specific IPv6 addresses or remove the rule", port),
				})
			}
		}
	}

	if !hasSensitiveOpen {
		findings = append(findings, Finding{
			Check:    "SG-OpenPort",
			Resource: groupLabel,
			Issue:    "no sensitive ports open to the world",
			Severity: SevPass,
		})
	}

	return findings
}

S3 Audit (s3.go)

// s3.go
package main

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/smithy-go"
)

type bucketPolicyDoc struct {
	Statement []struct {
		Effect    string      `json:"Effect"`
		Principal interface{} `json:"Principal"`
	} `json:"Statement"`
}

func auditS3All(ctx context.Context, client *s3.Client) ([]Finding, error) {
	bucketsOutput, err := client.ListBuckets(ctx, &s3.ListBucketsInput{})
	if err != nil {
		return nil, fmt.Errorf("list buckets: %w", err)
	}

	var findings []Finding

	for _, bucket := range bucketsOutput.Buckets {
		name := *bucket.Name
		bucketFindings := auditOneBucket(ctx, client, name)
		findings = append(findings, bucketFindings...)
	}

	return findings, nil
}

func auditOneBucket(ctx context.Context, client *s3.Client, name string) []Finding {
	var findings []Finding

	// Check public access block
	pubBlock, err := client.GetPublicAccessBlock(ctx, &s3.GetPublicAccessBlockInput{
		Bucket: &name,
	})
	if err != nil {
		var apiErr smithy.APIError
		if errors.As(err, &apiErr) && apiErr.ErrorCode() == "NoSuchPublicAccessBlockConfiguration" {
			findings = append(findings, Finding{
				Check:    "S3-PublicBlock",
				Resource: name,
				Issue:    "no public access block configured",
				Severity: SevWarn,
				Fix:      fmt.Sprintf("aws s3api put-public-access-block --bucket %s --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true", name),
			})
		}
	} else {
		cfg := pubBlock.PublicAccessBlockConfiguration
		// The block settings are *bool in recent SDK versions; treat nil as false.
		allBlocked := cfg.BlockPublicAcls != nil && *cfg.BlockPublicAcls &&
			cfg.IgnorePublicAcls != nil && *cfg.IgnorePublicAcls &&
			cfg.BlockPublicPolicy != nil && *cfg.BlockPublicPolicy &&
			cfg.RestrictPublicBuckets != nil && *cfg.RestrictPublicBuckets
		if allBlocked {
			findings = append(findings, Finding{
				Check:    "S3-PublicBlock",
				Resource: name,
				Issue:    "public access block fully enabled",
				Severity: SevPass,
			})
		} else {
			findings = append(findings, Finding{
				Check:    "S3-PublicBlock",
				Resource: name,
				Issue:    "public access block is partially enabled",
				Severity: SevWarn,
				Fix:      "enable all four public access block settings",
			})
		}
	}

	// Check bucket policy for Principal: *
	policyOutput, err := client.GetBucketPolicy(ctx, &s3.GetBucketPolicyInput{
		Bucket: &name,
	})
	if err != nil {
		var apiErr smithy.APIError
		if errors.As(err, &apiErr) && apiErr.ErrorCode() == "NoSuchBucketPolicy" {
			// No policy is safe - no additional access granted
		} else {
			findings = append(findings, Finding{
				Check:    "S3-Policy",
				Resource: name,
				Issue:    fmt.Sprintf("could not read bucket policy: %v", err),
				Severity: SevWarn,
			})
		}
	} else if policyOutput.Policy != nil {
		var doc bucketPolicyDoc
		if err := json.Unmarshal([]byte(*policyOutput.Policy), &doc); err == nil {
			for _, stmt := range doc.Statement {
				if strings.EqualFold(stmt.Effect, "Allow") && principalIsStar(stmt.Principal) {
					findings = append(findings, Finding{
						Check:    "S3-Policy",
						Resource: name,
						Issue:    "bucket policy allows public access (Principal: *)",
						Severity: SevFail,
						Fix:      "remove the statement with Principal: * or restrict to specific accounts",
					})
					break
				}
			}
		}
	}

	return findings
}

func principalIsStar(p interface{}) bool {
	switch v := p.(type) {
	case string:
		return v == "*"
	case map[string]interface{}:
		for _, val := range v {
			switch inner := val.(type) {
			case string:
				if inner == "*" {
					return true
				}
			case []interface{}:
				for _, item := range inner {
					if s, ok := item.(string); ok && s == "*" {
						return true
					}
				}
			}
		}
	}
	return false
}

CloudTrail Audit (cloudtrail.go)

// cloudtrail.go
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/cloudtrail"
)

func auditCloudTrailAll(ctx context.Context, client *cloudtrail.Client) ([]Finding, error) {
	trailsOutput, err := client.DescribeTrails(ctx, &cloudtrail.DescribeTrailsInput{})
	if err != nil {
		return nil, fmt.Errorf("describe trails: %w", err)
	}

	if len(trailsOutput.TrailList) == 0 {
		return []Finding{{
			Check:    "CT-Enabled",
			Resource: "(account)",
			Issue:    "no CloudTrail trails configured",
			Severity: SevFail,
			Fix:      "create a multi-region trail with aws cloudtrail create-trail",
		}}, nil
	}

	var findings []Finding
	hasActiveMultiRegion := false

	for _, trail := range trailsOutput.TrailList {
		name := "(unknown)"
		if trail.Name != nil {
			name = *trail.Name
		}
		if trail.TrailARN == nil {
			continue
		}

		isMultiRegion := trail.IsMultiRegionTrail != nil && *trail.IsMultiRegionTrail

		statusOutput, err := client.GetTrailStatus(ctx, &cloudtrail.GetTrailStatusInput{
			Name: trail.TrailARN,
		})
		if err != nil {
			findings = append(findings, Finding{
				Check:    "CT-Status",
				Resource: name,
				Issue:    fmt.Sprintf("could not check trail status: %v", err),
				Severity: SevWarn,
			})
			continue
		}

		// IsLogging is a *bool in the SDK; treat nil as not logging.
		isLogging := statusOutput.IsLogging != nil && *statusOutput.IsLogging
		if !isLogging {
			findings = append(findings, Finding{
				Check:    "CT-Status",
				Resource: name,
				Issue:    "trail exists but logging is stopped",
				Severity: SevFail,
				Fix:      fmt.Sprintf("aws cloudtrail start-logging --name %s", name),
			})
		} else {
			findings = append(findings, Finding{
				Check:    "CT-Status",
				Resource: name,
				Issue:    "trail is actively logging",
				Severity: SevPass,
			})
		}

		if isLogging && isMultiRegion {
			hasActiveMultiRegion = true
		}
	}

	if !hasActiveMultiRegion {
		findings = append(findings, Finding{
			Check:    "CT-MultiRegion",
			Resource: "(account)",
			Issue:    "no active multi-region trail found",
			Severity: SevFail,
			Fix:      "enable multi-region on your active trail",
		})
	}

	return findings, nil
}

Main Entry Point with Colored Output (main.go)

// main.go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/cloudtrail"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
	"github.com/aws/aws-sdk-go-v2/service/iam"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// ANSI color codes
const (
	colorReset  = "\033[0m"
	colorRed    = "\033[31m"
	colorGreen  = "\033[32m"
	colorYellow = "\033[33m"
	colorCyan   = "\033[36m"
	colorBold   = "\033[1m"
)

func colorForSeverity(sev string) string {
	switch sev {
	case SevPass:
		return colorGreen
	case SevWarn:
		return colorYellow
	case SevFail:
		return colorRed
	default:
		return colorReset
	}
}

func printBanner() {
	fmt.Printf("%s%s", colorBold, colorCyan)
	fmt.Println("============================================")
	fmt.Println("       AWS Security Audit Scanner")
	fmt.Println("============================================")
	fmt.Printf("%s\n", colorReset)
}

func printSection(name string) {
	fmt.Printf("\n%s%s--- %s ---%s\n", colorBold, colorCyan, name, colorReset)
}

func printFinding(f Finding) {
	color := colorForSeverity(f.Severity)
	fmt.Printf("  %s%-5s%s  %-30s  %s\n", color, f.Severity, colorReset, f.Resource, f.Issue)
	if f.Fix != "" && f.Severity != SevPass {
		fmt.Printf("         %s> Fix: %s%s\n", colorYellow, f.Fix, colorReset)
	}
}

func printSummary(findings []Finding) {
	pass, warn, fail := 0, 0, 0
	for _, f := range findings {
		switch f.Severity {
		case SevPass:
			pass++
		case SevWarn:
			warn++
		case SevFail:
			fail++
		}
	}

	total := pass + warn + fail
	fmt.Printf("\n%s%s", colorBold, colorCyan)
	fmt.Println("============================================")
	fmt.Println("               Summary")
	fmt.Println("============================================")
	fmt.Printf("%s", colorReset)
	fmt.Printf("  Total checks:  %d\n", total)
	fmt.Printf("  %sPassed:       %d%s\n", colorGreen, pass, colorReset)
	fmt.Printf("  %sWarnings:     %d%s\n", colorYellow, warn, colorReset)
	fmt.Printf("  %sFailures:     %d%s\n", colorRed, fail, colorReset)
	fmt.Println()

	if fail > 0 {
		fmt.Printf("  %s%sAction required: %d finding(s) need attention.%s\n",
			colorBold, colorRed, fail, colorReset)
	} else if warn > 0 {
		fmt.Printf("  %s%sReview recommended: %d warning(s) found.%s\n",
			colorBold, colorYellow, warn, colorReset)
	} else {
		fmt.Printf("  %s%sAll checks passed.%s\n",
			colorBold, colorGreen, colorReset)
	}
}

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("load AWS config: %v", err)
	}

	printBanner()

	var allFindings []Finding

	// IAM audit
	printSection("IAM Users and Access Keys")
	iamClient := iam.NewFromConfig(cfg)
	iamFindings, err := auditIAM(ctx, iamClient)
	if err != nil {
		fmt.Fprintf(os.Stderr, "  ERROR: %v\n", err)
	} else {
		for _, f := range iamFindings {
			printFinding(f)
		}
		allFindings = append(allFindings, iamFindings...)
	}

	// Security groups audit
	printSection("Security Groups")
	ec2Client := ec2.NewFromConfig(cfg)
	sgFindings, err := auditSecurityGroupsAll(ctx, ec2Client)
	if err != nil {
		fmt.Fprintf(os.Stderr, "  ERROR: %v\n", err)
	} else {
		for _, f := range sgFindings {
			printFinding(f)
		}
		allFindings = append(allFindings, sgFindings...)
	}

	// S3 audit
	printSection("S3 Buckets")
	s3Client := s3.NewFromConfig(cfg)
	s3Findings, err := auditS3All(ctx, s3Client)
	if err != nil {
		fmt.Fprintf(os.Stderr, "  ERROR: %v\n", err)
	} else {
		for _, f := range s3Findings {
			printFinding(f)
		}
		allFindings = append(allFindings, s3Findings...)
	}

	// CloudTrail audit
	printSection("CloudTrail")
	ctClient := cloudtrail.NewFromConfig(cfg)
	ctFindings, err := auditCloudTrailAll(ctx, ctClient)
	if err != nil {
		fmt.Fprintf(os.Stderr, "  ERROR: %v\n", err)
	} else {
		for _, f := range ctFindings {
			printFinding(f)
		}
		allFindings = append(allFindings, ctFindings...)
	}

	// Summary
	printSummary(allFindings)
}

Running the Complete Scanner

go build -o awsaudit .
./awsaudit

Here is what the output looks like against a real account:

============================================
       AWS Security Audit Scanner
============================================

--- IAM Users and Access Keys ---
  PASS   admin                           console access with MFA enabled
  PASS   sarah                           console access with MFA enabled
  FAIL   john                            console access enabled without MFA
         > Fix: aws iam enable-mfa-device for user john
  FAIL   admin/AKIA...ADMKEY             access key is 762 days old (max 90)
         > Fix: rotate key AKIA...ADMKEY for user admin
  FAIL   john/AKIA...JHNKEY             access key is 346 days old (max 90)
         > Fix: rotate key AKIA...JHNKEY for user john
  FAIL   deploy-ci/AKIA...KEY1          access key is 411 days old (max 90)
         > Fix: rotate key AKIA...KEY1 for user deploy-ci
  FAIL   deploy-ci/AKIA...KEY2          access key is 180 days old (max 90)
         > Fix: rotate key AKIA...KEY2 for user deploy-ci

--- Security Groups ---
  PASS   sg-0a1b... (default)            no sensitive ports open to the world
  FAIL   sg-1a2b... (web-servers)        port 22 (SSH) open to 0.0.0.0/0
         > Fix: restrict port 22 to specific IPs or remove the rule
  FAIL   sg-1a2b... (web-servers)        port 22 (SSH) open to ::/0 (IPv6)
         > Fix: restrict port 22 to specific IPv6 addresses or remove the rule
  PASS   sg-2a3b... (database)           no sensitive ports open to the world

--- S3 Buckets ---
  PASS   app-assets-prod                 public access block fully enabled
  PASS   company-backups                 public access block fully enabled
  PASS   website-static                  public access block fully enabled
  WARN   temp-data-uploads               no public access block configured
         > Fix: aws s3api put-public-access-block --bucket temp-data-uploads ...
  FAIL   temp-data-uploads               bucket policy allows public access (Principal: *)
         > Fix: remove the statement with Principal: * or restrict to specific accounts

--- CloudTrail ---
  PASS   management-trail                trail is actively logging
  FAIL   old-trail                       trail exists but logging is stopped
         > Fix: aws cloudtrail start-logging --name old-trail

============================================
               Summary
============================================
  Total checks:  18
  Passed:       8
  Warnings:     1
  Failures:     9

  Action required: 9 finding(s) need attention.

The scanner found 9 failures. Each one includes the resource, the problem, and a command or action to fix it.
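
If the scanner runs in CI, a non-zero exit code makes failures block the pipeline. A small sketch of what could be appended to the end of main() (not part of the code above):

// Exit non-zero when any finding has FAIL severity, so CI jobs fail loudly.
failCount := 0
for _, f := range allFindings {
    if f.Severity == SevFail {
        failCount++
    }
}
if failCount > 0 {
    os.Exit(1)
}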

What the Scanner Checks

Here is a summary of every check:

Check           What it does             FAIL condition
IAM-MFA         Console access and MFA   Console enabled, MFA disabled
IAM-KeyAge      Access key age           Active key older than 90 days
SG-OpenPort     Sensitive ports open     Port 22/3306/5432/6379/27017/9200 open to 0.0.0.0/0 or ::/0
S3-PublicBlock  Public access block      Block not configured (reported as WARN)
S3-Policy       Bucket policy            Principal: * in an Allow statement
CT-Status       CloudTrail logging       Trail exists but IsLogging is false
CT-MultiRegion  Multi-region trail       No active multi-region trail

Extending the Scanner

To add a new check, create a function that returns []Finding and call it from main(). Every finding needs a check name, resource identifier, issue description, severity, and fix suggestion.

For example, to add an RDS public access check:

func auditRDS(ctx context.Context, client *rds.Client) ([]Finding, error) {
    output, err := client.DescribeDBInstances(ctx, &rds.DescribeDBInstancesInput{})
    if err != nil {
        return nil, fmt.Errorf("describe DB instances: %w", err)
    }

    var findings []Finding
    for _, db := range output.DBInstances {
        if db.PubliclyAccessible != nil && *db.PubliclyAccessible {
            findings = append(findings, Finding{
                Check:    "RDS-Public",
                Resource: *db.DBInstanceIdentifier,
                Issue:    "database is publicly accessible",
                Severity: SevFail,
                Fix:      "modify the DB instance to disable public access",
            })
        }
    }
    return findings, nil
}

The pattern is always the same. Call an AWS API. Check the response for problems. Return findings.
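
Wiring a new check into the report follows the same shape as the existing sections of main(). A hedged sketch for the RDS example above (it assumes go get github.com/aws/aws-sdk-go-v2/service/rds and the matching import):

// Hypothetical addition to main(), mirroring the other sections.
printSection("RDS Instances")
rdsClient := rds.NewFromConfig(cfg)
rdsFindings, err := auditRDS(ctx, rdsClient)
if err != nil {
    fmt.Fprintf(os.Stderr, "  ERROR: %v\n", err)
} else {
    for _, f := range rdsFindings {
        printFinding(f)
    }
    allFindings = append(allFindings, rdsFindings...)
}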

What We Built

We started with individual AWS CLI commands to audit different parts of an account. Then we wrote Go code for each check, hitting real bugs along the way:

  1. IAM audit – pagination on MFA devices. Fixed with paginators.
  2. Security groups – missing IPv6 checks. Fixed by checking both IpRanges and Ipv6Ranges.
  3. S3 buckets – treating no-policy as insecure. Fixed by checking the specific error code.
  4. CloudTrail – not checking if logging is active. Fixed by calling GetTrailStatus.
  5. Access keys – time parsing pitfalls. Fixed by using the SDK’s built-in time types.

The final scanner runs all checks in sequence and produces a colored report with pass/warn/fail counts and fix suggestions for every failure.

The only external dependencies are the AWS SDK v2 modules and smithy-go, which the S3 error handling uses. The scanner runs from any machine with AWS credentials configured. Point it at any account and it will find the common misconfigurations.
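
If you audit more than one account, you do not have to juggle environment variables. The config loader accepts options for a named profile and region; a minimal sketch (the profile name is an example):

// Variation of the config load in main(): pick an explicit profile and region.
cfg, err := config.LoadDefaultConfig(ctx,
    config.WithSharedConfigProfile("audit-readonly"),
    config.WithRegion("us-east-1"),
)
if err != nil {
    log.Fatalf("load AWS config: %v", err)
}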


Question

What AWS security checks would you add to this scanner?
