What to Collect Before an AWS Security Review

A preparation guide for AWS teams that want a faster, cleaner security review with less scrambling for evidence.

An AWS security review is much more useful when the team has the right information ready.

The preparation does not need to be perfect. Many reviews start precisely because the AWS environment has grown faster than the documentation around it. But a small amount of preparation can make the review faster, reduce back-and-forth, and help the reviewer focus on the issues that matter most.

The goal is not to create a polished audit pack. The goal is to collect enough context to understand the environment, test the important assumptions, and produce practical remediation advice.

1. AWS account list and ownership

Start with the shape of the AWS environment.

Collect:

  • AWS account names and IDs;
  • what each account is used for;
  • which accounts are production, staging, development or shared services;
  • account owners or technical contacts;
  • AWS Organizations structure, if used;
  • root account email ownership;
  • regions in active use;
  • any accounts that may be legacy, unused or uncertain.

This helps separate intentional design from historical drift.

For small teams, the account map may be simple. For longer-lived environments, it may reveal test accounts, old production accounts, orphaned workloads or accounts that no current engineer fully owns.
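One low-effort way to keep the account map honest is to hold it as plain data and flag gaps mechanically. A minimal sketch in Python; every account name, ID and owner below is invented for illustration:

```python
# Hedged sketch: an account inventory as plain data, with a check for
# accounts that lack a clear owner. All names, IDs and owners are invented.
accounts = [
    {"id": "111111111111", "name": "prod", "env": "production", "owner": "platform-lead"},
    {"id": "222222222222", "name": "staging", "env": "staging", "owner": "platform-lead"},
    {"id": "333333333333", "name": "legacy-test", "env": "unknown", "owner": None},
]

# Accounts with no owner or an unknown environment are review follow-ups.
unowned = [a for a in accounts if not a["owner"] or a["env"] == "unknown"]
for a in unowned:
    print(f"Needs follow-up: {a['name']} ({a['id']})")
```

Even a list this small makes "legacy, unused or uncertain" accounts visible before the review starts.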

2. Production boundary and service map

A reviewer needs to understand what is business-critical.

Prepare a simple service map showing:

  • customer-facing applications;
  • APIs;
  • databases;
  • object storage;
  • queues and event systems;
  • load balancers;
  • container or server platforms;
  • important third-party integrations;
  • deployment pathways.

This does not need to be a beautiful architecture diagram. A rough diagram, spreadsheet or plain-English description is enough to start.

The important question is:

What would materially affect customers, revenue or sensitive data if it failed or was compromised?
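The service map can be as simple as a small data structure that records which services are public and which are critical. A hedged sketch; the services and dependencies here are invented examples:

```python
# Hedged sketch: a service map as plain data with public/critical flags.
# The services and dependencies listed here are invented examples.
services = {
    "web-app":  {"public": True,  "critical": True,  "depends_on": ["api"]},
    "api":      {"public": True,  "critical": True,  "depends_on": ["postgres"]},
    "postgres": {"public": False, "critical": True,  "depends_on": []},
    "metrics":  {"public": False, "critical": False, "depends_on": []},
}

# The critical set answers the question above: what matters if it fails?
critical = sorted(name for name, s in services.items() if s["critical"])
print("Review focus:", ", ".join(critical))
```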

3. Identity provider and access model

Identity and access are usually central to an AWS security review.

Collect:

  • how humans access AWS;
  • whether AWS IAM Identity Center, an external IdP or direct IAM users are used;
  • administrator role names;
  • who can assume privileged roles;
  • how MFA is enforced;
  • whether break-glass access exists;
  • how joiners, movers and leavers are handled;
  • third-party access arrangements;
  • CI/CD roles and service accounts;
  • any known old access keys or shared accounts.

Do not send passwords, private keys, access keys or secrets. The review needs to understand how access works, not receive sensitive credentials.

A useful access summary might look like this:

| Access type | Current approach | Owner | Notes |
| --- | --- | --- | --- |
| Human admin access | SSO role for named engineers | Platform lead | Reviewed quarterly |
| Break-glass access | Root account plus emergency IAM role | CTO | Needs clearer evidence |
| CI/CD deployment | GitHub Actions role | Engineering lead | Scope to be reviewed |
| Third-party access | Vendor IAM role | Operations | Expiry date unclear |
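For the old-access-keys bullet, one concrete preparation step is to check the IAM credential report (downloadable from the console) for stale keys. A hedged sketch using the report's real column names but invented sample rows:

```python
import csv
import io
from datetime import datetime, timezone

# Hedged sketch: flag active access keys older than 90 days from an IAM
# credential report. Column names match the real report; rows are invented.
report = """user,access_key_1_active,access_key_1_last_rotated
deploy-bot,true,2021-03-01T00:00:00+00:00
alice,false,2020-06-01T00:00:00+00:00
"""

MAX_AGE_DAYS = 90
now = datetime.now(timezone.utc)
stale = []
for row in csv.DictReader(io.StringIO(report)):
    if row["access_key_1_active"] != "true":
        continue  # inactive keys are a smaller concern
    age = (now - datetime.fromisoformat(row["access_key_1_last_rotated"])).days
    if age > MAX_AGE_DAYS:
        stale.append(row["user"])
        print(f"{row['user']}: access key last rotated {age} days ago")
```

Share the summary of stale keys, not the keys themselves.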

4. Logging and detection evidence

Logging is not only about whether a service is enabled. It is about whether the team can reconstruct important events and respond to alerts.

Collect notes or screenshots showing:

  • CloudTrail configuration;
  • whether CloudTrail covers all accounts and regions;
  • where logs are stored;
  • whether log storage is protected from easy deletion;
  • GuardDuty status and finding destinations;
  • Security Hub status and enabled standards, if used;
  • AWS Config status, if used;
  • alert routing to email, chat, ticketing or managed services;
  • who reviews security findings;
  • examples of recent findings or triage notes, with sensitive details removed.

The review should answer a practical question:

If something happened last week, would the team know where to look and who would respond?
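The logging notes above can be turned into explicit pass/gap checks rather than prose. A hedged sketch: the field names mirror the CloudTrail describe-trails output, but the values are invented for illustration:

```python
# Hedged sketch: turn CloudTrail notes into explicit pass/gap checks. The
# field names mirror the describe-trails API output; values are invented.
trail = {
    "Name": "org-trail",
    "IsMultiRegionTrail": True,
    "LogFileValidationEnabled": False,
    "S3BucketName": "org-cloudtrail-logs",
}

checks = {
    "covers all regions": trail["IsMultiRegionTrail"],
    "log files protected by validation": trail["LogFileValidationEnabled"],
}
for name, ok in checks.items():
    print(f"{'OK ' if ok else 'GAP'} {name}")
```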

5. Public exposure and network notes

Public exposure often accumulates over time. A review should distinguish intentional exposure from accidental exposure.

Collect:

  • internet-facing load balancers;
  • public API gateways;
  • public IP addresses;
  • security groups allowing inbound access from the internet;
  • public S3 buckets or bucket policies;
  • public snapshots or images;
  • exposed databases or admin interfaces;
  • VPN, bastion or zero-trust access arrangements;
  • DNS records for important services;
  • known legacy hosts or test systems.

A helpful preparation step is to list what should be public.

For example:

| Resource | Should it be public? | Reason |
| --- | --- | --- |
| Main web application load balancer | Yes | Customer-facing application |
| Production database | No | Should only be reachable from application tier |
| Admin dashboard | No | Should be behind VPN or restricted access |
| Static asset bucket | Possibly | Depends on bucket policy and CDN design |

This makes accidental exposure much easier to spot.
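The same comparison can be mechanised for security groups: list every rule open to the internet, then check each entry against the should-be-public list. A hedged sketch whose data structure mirrors the EC2 describe-security-groups output; the group IDs and rules are invented:

```python
# Hedged sketch: flag security group rules open to the whole internet.
# The shape mirrors the describe-security-groups output; data is invented.
groups = [
    {"GroupId": "sg-web", "IpPermissions": [
        {"FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]},
    {"GroupId": "sg-db", "IpPermissions": [
        {"FromPort": 5432, "ToPort": 5432, "IpRanges": [{"CidrIp": "10.0.0.0/16"}]},
        {"FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]},
]

findings = []
for g in groups:
    for perm in g["IpPermissions"]:
        if any(r["CidrIp"] == "0.0.0.0/0" for r in perm["IpRanges"]):
            findings.append(f"{g['GroupId']}: port {perm['FromPort']} open to 0.0.0.0/0")

for f in findings:
    print(f)
```

In this invented example, port 443 on the web group is intentional exposure, while SSH open to the internet on the database group is the kind of finding the should-be-public list helps catch.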

6. Backup and recovery evidence

Backup settings are useful, but restore evidence is stronger.

Collect:

  • RDS backup configuration;
  • database snapshot retention;
  • AWS Backup plans, if used;
  • S3 versioning or replication settings, if relevant;
  • infrastructure state backup details;
  • application data backup notes;
  • restore test records;
  • recovery time and recovery point assumptions;
  • who can perform a restore;
  • any known gaps or untested assumptions.

A review should separate backup presence from recovery confidence.

A system can have snapshots and still be hard to recover under pressure if nobody has tested the process, documented the steps or confirmed access to the required credentials.
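The recovery point assumption can also be checked mechanically against the newest backup. A hedged sketch; the snapshot ages and the 24-hour RPO below are invented examples:

```python
from datetime import timedelta

# Hedged sketch: compare the newest backup age against a stated recovery
# point objective (RPO). The ages and the 24-hour RPO are invented.
rpo = timedelta(hours=24)
snapshot_ages_hours = [30, 54, 78]  # hours since each snapshot was taken

newest = min(snapshot_ages_hours)
if timedelta(hours=newest) <= rpo:
    print("within RPO")
else:
    print(f"RPO exceeded: newest snapshot is {newest} hours old")
```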

7. Secrets, keys and sensitive configuration

The review should understand how secrets are managed without collecting the secrets themselves.

Prepare notes on:

  • where application secrets are stored;
  • whether AWS Secrets Manager, SSM Parameter Store or another tool is used;
  • how secrets are rotated;
  • who can read production secrets;
  • whether secrets appear in environment variables, Terraform state, CI/CD systems or repositories;
  • how KMS keys are used for important data;
  • any known historical secret leaks or clean-up work.

Do not provide raw secret values. Good evidence proves the control without exposing the thing being protected.
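One practical safeguard before sharing evidence is to scan it for strings shaped like credentials. A hedged sketch that matches the documented AWS access key ID format; the sample line uses AWS's published example key, not a real one:

```python
import re

# Hedged sketch: scan evidence text for strings shaped like AWS access key
# IDs before sharing it. The sample uses AWS's documented example key.
sample = "export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE"

pattern = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")
matches = [m.group() for m in pattern.finditer(sample)]
for m in matches:
    print(f"possible access key ID: {m[:8]}... (redact before sharing)")
```

A simple scan like this will not catch every secret, but it catches the most recognisable ones before a screenshot or log excerpt leaves the team.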

8. Infrastructure as code and deployment workflow

If Terraform, CloudFormation, CDK or another infrastructure-as-code approach is used, include it in the review.

Collect:

  • relevant IaC repositories;
  • who can approve changes;
  • how production changes are deployed;
  • where Terraform state is stored;
  • who can access state;
  • CI/CD workflows and deployment roles;
  • how secrets are injected into deployments;
  • whether manual console changes are common;
  • any drift detection process.

If the environment is mostly manual, say so. That is not a reason to avoid the review. It simply changes the recommendations.

For some teams, the right first remediation step is to document the current baseline before trying to automate everything.
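Documenting the baseline can start as plain data too. A hedged sketch that records where Terraform state lives and flags common gaps; the field names and values are illustrative only:

```python
# Hedged sketch: record Terraform state storage as plain data and check it
# for common gaps. The field names and values here are illustrative only.
state_backend = {
    "type": "s3",
    "bucket": "tf-state-prod",
    "encrypted": True,
    "locking": False,  # e.g. no lock table or lockfile configured
}

gaps = []
if not state_backend.get("encrypted"):
    gaps.append("state not encrypted at rest")
if not state_backend.get("locking"):
    gaps.append("no state locking; concurrent applies can conflict")
for g in gaps:
    print(f"GAP: {g}")
```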

9. Customer, insurer or leadership questions

Many AWS reviews are triggered by external pressure.

Collect any recent:

  • customer security questionnaires;
  • enterprise procurement questions;
  • cyber insurance questions;
  • board or leadership concerns;
  • audit or compliance notes;
  • previous security findings;
  • incident postmortems;
  • risk register entries.

These materials help the review focus on what the business actually needs to answer.

For example, if customers repeatedly ask about privileged access, logging retention and restore testing, the review output should make those areas evidence-ready.

10. Known concerns and suspected gaps

Do not hide known weaknesses from the review process.

Useful context includes:

  • things the team already knows are too broad or messy;
  • old systems nobody is fully comfortable with;
  • temporary exceptions that became permanent;
  • previous findings that were never fully remediated;
  • tooling that is enabled but ignored;
  • unclear ownership;
  • areas where the team wants a second opinion.

A good review is not a blame exercise. It is a way to turn uncertainty into a manageable remediation plan.

What not to collect

Avoid sending sensitive material that is not needed.

Do not collect or share:

  • passwords;
  • private keys;
  • AWS secret access keys;
  • raw customer data;
  • unnecessary production logs;
  • full vulnerability exports with sensitive context unless agreed;
  • screenshots that expose secrets, tokens or customer information;
  • documents that include unrelated confidential business information.

Evidence should be scoped. The reviewer needs enough information to understand and assess the control, not a copy of every sensitive asset.

Who should be involved

A small AWS security review usually benefits from input from:

  • someone who understands AWS account structure;
  • someone who owns production operations;
  • someone who understands deployment workflows;
  • someone who can explain customer security requirements;
  • someone with authority to approve remediation priorities.

In a small SaaS company, that may be the same one or two people. That is fine. The important thing is that the review has both technical context and business context.

A practical pre-review checklist

Before the review starts, prepare:

| Area | Useful material |
| --- | --- |
| Accounts | Account list, owners, regions, production boundaries |
| Architecture | Rough diagram, service map, data stores, customer-facing entry points |
| Access | SSO/IdP notes, admin roles, third-party access, break-glass process |
| Logging | CloudTrail, GuardDuty, Security Hub, AWS Config, alert routing |
| Exposure | Public resources, security groups, DNS, admin interfaces, legacy hosts |
| Backups | Backup policies, restore tests, retention, recovery assumptions |
| Secrets | Secret storage approach, KMS usage, access to production secrets |
| IaC and CI/CD | Repositories, state storage, deployment roles, change workflow |
| Evidence | Customer questionnaires, insurer questions, previous findings |
| Ownership | Technical contacts, risk owners, remediation decision-makers |

What good preparation achieves

Good preparation does not mean every control is already mature. It means the review can start with context instead of discovery chaos.

The best outcome is a shorter path to useful findings:

  • fewer basic clarification questions;
  • better understanding of production risk;
  • more accurate prioritisation;
  • clearer evidence for customers or leadership;
  • more practical remediation recommendations.

An AWS security review should help the team move from vague concern to specific action. Preparing the right material before the review makes that much easier.