The Overlooked Six | AWS Security Blind Spots

Foreword & Guest Author Bio

As part of this ongoing series, SentinelOne presents guest blogs from cloud security experts sharing their views on cloud security best practices. Following on from Aidan Steele’s introduction to AWS security best practices, we now have a more detailed view from Don Magee, who outlines practical advice across six AWS security blind spots.

Don Magee is a senior cloud security engineer at DRW and an adjunct faculty member at Southwestern Michigan College. He has previously held cloud security roles at Allstate and AWS.

Introduction

In this post, we’ll dig into six of what I like to call “AWS security blind spots” – those often overlooked controls, techniques, or risks related to our cloud infrastructure. We’ll explore why they’re so easy to miss, why they matter, and, most importantly, how to address them. From the intricacies of Service Control Policies in multi-account strategies to the often-neglected principle of infrastructure immutability, we’ll cover ground that goes beyond the usual top ten misconfigurations.

So, whether you’re a seasoned AWS architect looking to fine-tune your security posture or a cloud engineer just getting started in your career, this post will help you keep yourself, your co-workers, your cloud, and your customers safer.

Blind Spot 1 | Multi-Account Strategies and the Power of Service Control Policies

Many of the best security controls AWS provides are free. AWS Organizations is one of the most foundational, and it is critical to get it right the first time. Yet, even after setting up a multi-account structure, many teams miss a crucial piece of the puzzle: Service Control Policies (SCPs).

The Critical Importance of Multi-Account Strategies

Before we dive into SCPs, let’s talk about why multi-account strategies are so crucial:

  • Isolation/Blast Radius Containment – By segregating resources across accounts, you limit the impact of security breaches or misconfigurations. If one account is compromised, others remain unaffected.
  • Granular Access Control – Different accounts can have different IAM policies, allowing you to tailor access based on team, environment, or function without complex, monolithic policies.
  • Billing and Cost Allocation – Separate accounts make it easier to track costs and allocate them to specific projects, teams, or departments. Yes, billing is indeed a security concern in my mind.
  • Compliance/Auditing – Scoping regulated workloads to dedicated accounts keeps the audit boundary small and makes compliance evidence easier to gather.
  • Resource Quotas – Each AWS account has its own service quotas. Multiple accounts allow you to scale beyond single-account limits. Failure to manage service limits can lead to availability issues.

Designing an Account Strategy

A typical multi-account structure might look like this:

  • Management account – This account owns the organization. It is not the place to run your workloads, develop new ideas, or do much of anything else if you can help it. It also sits at the root of the organization.
  • Production OU – Here are our production accounts. We will apply the most restrictive controls here. Dragons live here.
    • Production account 1-n – These accounts represent our products, initiatives, and other systems. Each development team may own an account.
  • Development OU – This is where the work happens. Each team, or possibly each developer (depending on size), should have their own account so they do not step on each other. This also allows us to monitor cost per team. These accounts should never be able to interact with the production OU.
  • Test/QA/Staging OU – Whatever you call it, you need a place where code goes before it hits production. By separating this from the development environment, we ensure that ongoing work will not affect our validation efforts.
  • Sandbox OU – A place for experimentation and learning. Typically, it has looser controls but stricter budget constraints.
  • Infrastructure OU – For shared services and resources used across the organization.
    • Network Account – Central management of VPCs, Transit Gateways, and other network resources.
    • Shared Services Account – For resources used across multiple accounts (e.g., centralized DNS, directory services).
  • Security OU – Our security tools will live here. We will want to separate our tooling by risk profile. Examples:
    • Logging account – We can store our CloudTrail logs, VPC Flow Logs, S3 access logs, and so on here.
    • Tooling account – Here, we may run security tools, bought or built, that require us to self-host.
    • SSO account – We may consider delegating our Single Sign-On to a dedicated account.
  • Suspended OU – A place for accounts that are no longer active but not yet ready to be closed. An SCP here restricts all actions.

Remember, this is not formulaic and will depend on your organization’s specific needs, size, and compliance requirements. The key is to design a structure that provides clear separation, aids in applying the principle of least privilege, and makes management and security enforcement easier.

Enter the SCP

SCPs are the secret sauce that turns a good multi-account setup into a great one. SCPs are JSON policies that you apply at the organization root, OU, or account level to set the maximum permissions available to the accounts in those containers. They act as a permission ceiling: an identity’s effective permissions are the intersection of its IAM policies and the SCPs that apply to its account. Here’s how SCPs enhance our multi-account strategy:

  • Enforce Organizational Standards – With SCPs, you can enforce standards across your entire organization from the top down. For example, you could prevent any account from creating public S3 buckets or require encryption for all EBS volumes.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyPublicS3Buckets",
      "Effect": "Deny",
      "Action": "s3:PutBucketPublicAccessBlock",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "s3:PublicAccessBlockConfiguration": "false"
        }
      }
    }
  ]
}
  • Implement Least Privilege at Scale – SCPs allow you to implement least privilege not just at the user level but at the account and OU level. For instance, you could restrict the Development OU to only use specific regions or services. A common use case is to prevent the Suspended OU from taking any actions (a sketch of this appears at the end of this section). Similarly, you can use SCPs to lock down your Management Account, ensuring that it’s only used for organization-level operations and not for running workloads.
  • Protect Critical OUs – For your Production OU, you can use SCPs to prevent any changes to critical infrastructure or security settings. This adds an extra layer of protection against accidental or malicious actions.
  • Streamline Compliance – SCPs can enforce specific regulatory requirements. For example, you could ensure that all actions in a HIPAA-compliant account are logged and that certain data never leaves specific regions. Consider the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EnforceEncryption",
      "Effect": "Deny",
      "Action": [
        "s3:PutObject",
        "ec2:RunInstances",
        "rds:CreateDBInstance"
      ],
      "Resource": "*",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    },
    {
      "Sid": "EnsureLogging",
      "Effect": "Deny",
      "Action": [
        "s3:PutBucketLogging",
        "rds:DeleteDBInstanceAutomatedBackup",
        "cloudtrail:StopLogging",
        "cloudtrail:DeleteTrail"
      ],
      "Resource": "*"
    },
    {
      "Sid": "RestrictToUSRegions",
      "Effect": "Deny",
      "NotAction": [
        "a4b:*",
        "acm:*",
        "aws-marketplace-management:*",
        "aws-marketplace:*",
        "aws-portal:*",
        "budgets:*",
        "ce:*",
        "chime:*",
        "cloudfront:*",
        "config:*",
        "cur:*",
        "directconnect:*",
        "ec2:DescribeRegions",
        "ec2:DescribeTransitGateways",
        "ec2:DescribeVpnGateways",
        "fms:*",
        "globalaccelerator:*",
        "health:*",
        "iam:*",
        "importexport:*",
        "kms:*",
        "networkmanager:*",
        "organizations:*",
        "pricing:*",
        "route53:*",
        "route53domains:*",
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets",
        "shield:*",
        "sts:*",
        "support:*",
        "trustedadvisor:*",
        "waf-regional:*",
        "waf:*",
        "wafv2:*",
        "wellarchitected:*"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": [
            "us-east-1",
            "us-east-2",
            "us-west-1",
            "us-west-2"
          ]
        }
      }
    },
    {
      "Sid": "EnforceResourceTagging",
      "Effect": "Deny",
      "Action": [
        "ec2:RunInstances",
        "rds:CreateDBInstance",
        "s3:CreateBucket"
      ],
      "Resource": "*",
      "Condition": {
        "Null": {
          "aws:RequestTag/PHI": "true"
        }
      }
    }
  ]
}
  • Control Costs – SCPs can be used to restrict access to expensive services or instance types, helping to control costs in your Sandbox or Development OUs.
  • Enforce Security Best Practices – In your Security OU, you can use SCPs to ensure that security tools have the access they need while preventing any tampering with logging or monitoring services.

Think of SCPs as the guardrails that keep your accounts and OUs operating within defined parameters. Their true power is their ability to provide centralized, hierarchical control over your entire AWS organization. I can’t stress enough how critical an SCP strategy is to ensuring your security, compliance, and operational best practices.
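To make this concrete, here is a minimal sketch of creating and attaching the deny-all SCP for the Suspended OU described earlier. It assumes boto3 credentials for the management account, and the OU ID shown is a placeholder.

import json
import boto3

# Assumes this runs with management-account credentials that are allowed to
# manage AWS Organizations; the OU ID below is hypothetical.
orgs = boto3.client("organizations")

deny_all_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "DenyAll", "Effect": "Deny", "Action": "*", "Resource": "*"}
    ],
}

# Create the SCP...
policy = orgs.create_policy(
    Content=json.dumps(deny_all_scp),
    Description="Blocks all actions for suspended accounts",
    Name="suspended-ou-deny-all",
    Type="SERVICE_CONTROL_POLICY",
)

# ...then attach it to the Suspended OU so every account inside inherits it.
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-12345678",  # hypothetical Suspended OU ID
)

In practice you would manage this through your IaC tooling rather than ad-hoc scripts, but the API calls are the same.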

Blind Spot 2 | Overly Permissive IAM Policies

Identity and Access Management (IAM) is the cornerstone of AWS security, yet it remains a common source of frustration for security-conscious cloud engineers. The pet peeve here is not just the existence of overly permissive IAM policies, but the persistent misconceptions and lazy practices that lead to their creation.

The principle of least privilege is a fundamental tenet of security stating that an entity should have only the minimum levels of access necessary to perform its function. In AWS, this translates to crafting IAM policies that grant only the specific permissions required for a role or user to carry out their tasks. However, in practice, we often see policies that are far too broad, granting sweeping permissions across multiple services.

Time and again, we see the same common IAM mistakes:

  • Using Wildcards Indiscriminately – The wildcard (“*”) is powerful but dangerous. Using it carelessly (e.g., “Action”: “*”) can grant unintended permissions across multiple services.
  • Neglecting Resource-Level Permissions – Many AWS actions support resource-level permissions, allowing you to specify exactly which resources an entity can act upon. Failing to use these leads to unnecessarily broad access.
  • Ignoring Policy Conditions – AWS offers a rich set of condition keys that can be used to add nuanced restrictions to policies. Overlooking these means missing out on powerful fine-grained controls.
  • Blindly Attaching Managed Policies – While AWS-managed policies can be useful, attaching them without understanding their contents can lead to overprivileged entities.

So what can we do?

  • Regularly audit IAM policies.
  • Implement a process for reviewing and approving IAM policy changes, ensuring that each permission granted is justified and documented. Use AWS Organizations and SCPs (as discussed previously) to set guardrails that prevent the creation of overly permissive policies at the account level.
  • Leverage infrastructure-as-code tools like AWS CDK or Terraform to define and version control IAM policies, improving the ability to review and manage permissions over time.
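As a sketch of that last point, here is roughly what a least-privilege S3 policy might look like when defined as code with the AWS CDK for Python (v2); the construct names and bucket are illustrative only, not part of the original example.

from aws_cdk import Stack
from aws_cdk import aws_iam as iam
from constructs import Construct


class WebAppRoleStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Role assumed by the web app's EC2 instances
        role = iam.Role(
            self, "WebAppRole",
            assumed_by=iam.ServicePrincipal("ec2.amazonaws.com"),
        )

        # Grant only the S3 actions the app needs, on a single bucket
        role.add_to_policy(iam.PolicyStatement(
            actions=["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            resources=[
                "arn:aws:s3:::my-app-bucket",
                "arn:aws:s3:::my-app-bucket/*",
            ],
        ))

Because the policy lives in version control, permission changes now show up in code review instead of drifting quietly in the console.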

Effective IAM is not about making your life difficult. It’s about finding a balance to protect your resources without impacting your teams’ ability to work efficiently. This is a continuous and iterative process. By taking the time to craft precise, least-privilege IAM policies, you’re significantly reducing your attack surface and setting the stage for a more secure and manageable AWS environment.

Let’s look at an example. This policy is used by a web app:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*",
        "sqs:*"
      ],
      "Resource": "*"
    }
  ]
}

We can easily see some risks associated with this policy:

  • It allows full access to ALL S3 buckets in the account, not just the one(s) the application needs. This means an attacker who gains control of this role could potentially:
    • Read sensitive data from any bucket.
    • Delete or overwrite critical data.
    • Use our S3 resources for malicious purposes (e.g., hosting malware).

The policy also allows unrestricted SQS access, granting full control over ALL SQS queues in the account. An attacker who gains control of this role could potentially:

  • Read messages from unrelated queues, potentially exposing sensitive information.
  • Send malicious messages to any queue, disrupting other systems.
  • Delete queues or purge their contents, causing data loss or service disruptions.

How might we mitigate these concerns? We could scope the policy down to only the permissions the application actually requires. For example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3AccessForWebApp",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-app-bucket",
        "arn:aws:s3:::my-app-bucket/*"
      ]
    },
    {
      "Sid": "SQSAccessForWebApp",
      "Effect": "Allow",
      "Action": [
        "sqs:SendMessage",
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:us-west-2:123456789012:my-app-queue"
    }
  ]
}

This least-privilege policy gives us the following advantages:

  • By only allowing “s3:GetObject”, “s3:PutObject”, and “s3:ListBucket”, we limit the potential for abuse. The application can read, write, and list objects but can’t delete buckets, change bucket policies, or perform other potentially dangerous actions.
  • By restricting access to a single, specific bucket we ensure that in the event our credentials are compromised, an attacker can’t access or modify data in other buckets. Likewise, we limit access to a single, specific SQS queue.
  • By limiting ourselves to “sqs:SendMessage”, “sqs:ReceiveMessage”, “sqs:DeleteMessage”, and “sqs:GetQueueAttributes,” we allow the application to interact with messages but prevent it from creating or deleting queues, changing queue settings, or performing other administrative actions.

Ultimately, this ensures that if the application is compromised, the attacker’s ability to pivot to other AWS services is severely limited.

Blind Spot 3 | Mismanagement of Temporary Security Credentials

In the world of cloud security, the mismanagement of temporary security credentials is a subtle yet critical issue that often flies under the radar. This concern centers around the improper use and handling of short-term authentication tokens, particularly those generated by services such as the AWS Security Token Service (STS).

Temporary security credentials are a powerful feature in AWS, allowing for time-limited access to resources without long-term access keys. They’re crucial for implementing secure access patterns, especially in dynamic environments. However, their misuse can lead to significant security vulnerabilities.

Many organizations find themselves running into these common pitfalls:

  • Overextended Token Lifetimes – Setting excessively long expiration times for temporary tokens increases the window of opportunity for potential attackers if tokens are compromised.
  • Improper Storage and Transmission – Storing temporary credentials insecurely or transmitting them over unencrypted channels, risking exposure.
  • Lack of Monitoring – Failing to implement proper logging and monitoring for the use of temporary credentials, making it difficult to detect suspicious activity.
  • Misunderstanding of Token Refresh Mechanics – Incorrectly implementing token refresh logic in applications, leading to unnecessary privileges or application failures.

How to approach these issues:

  • Implement the AWS STS GetSessionToken or AssumeRole APIs with appropriate token lifetimes. For most use cases, tokens should be valid for hours, not days (see the sketch after this list).
  • Leverage IAM roles for EC2 instances and ECS tasks instead of embedding long-term credentials. This allows for the automatic rotation of temporary credentials. For Lambda functions, rely on the execution role and avoid passing additional credentials to the function.
  • Implement proper error handling and token refresh logic in your applications to gracefully handle token expiration.
  • Use AWS CloudTrail to monitor the issuance and use of temporary credentials. Set up alerts for suspicious patterns, such as high-volume token requests or use from unexpected IP ranges.
  • When federated access is required, implement proper integration with identity providers and use technologies such as SAML authentication to manage access across accounts.
  • Educate and socialize best practices with developers and operations teams.
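For the token-lifetime point above, here is a minimal sketch of requesting a short-lived role session with STS; the role ARN and session name are hypothetical, and one hour is used rather than the role’s maximum duration.

import boto3

sts = boto3.client("sts")

# Request a session that lasts one hour, not days.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/webapp-deploy",  # hypothetical role
    RoleSessionName="webapp-deploy-session",
    DurationSeconds=3600,
)

creds = resp["Credentials"]
s3_client = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# creds["Expiration"] tells you when this session ends; refresh before then
# rather than extending lifetimes.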

Putting This Into Practice

Consider a scenario where a developer builds a web application that needs to access an S3 bucket. They may make a mistake in their configuration such as the following:

import boto3
import requests

def get_temporary_credentials():
    # Retrieve temporary credentials from a custom endpoint
    response = requests.get('http://unsafe-creds.mycompany.com/temp-creds')
    creds = response.json()
    return creds

def upload_to_s3(file_name, bucket, object_name=None):
    # Get temporary credentials
    creds = get_temporary_credentials()

    # Create S3 client with temporary credentials
    s3_client = boto3.client(
        's3',
        aws_access_key_id=creds['AccessKeyId'],
        aws_secret_access_key=creds['SecretAccessKey'],
        aws_session_token=creds['SessionToken']
    )

    if object_name is None:
        object_name = file_name

    # Upload file
    s3_client.upload_file(file_name, bucket, object_name)

# Usage
upload_to_s3('example.txt', 'my-bucket')

This approach has a few flaws:

  • Insecure Credential Retrieval – Fetching credentials over HTTP is insecure.
  • No Error Handling – There’s no attempt to handle credential expiration or request failures.
  • Manual Credential Management – This approach is error-prone and doesn’t handle credential rotation.
  • Potential for Long-lived Credentials – There’s no guarantee that the credentials are short-lived.

A stronger approach would be to modify our code as follows:

import boto3
import logging

from botocore.exceptions import ClientError

# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Let the boto3 session handle credentials (instance/task role, etc.)
session = boto3.Session()
s3_client = session.client('s3')

def upload_to_s3(file_name, bucket, object_name=None):
    if object_name is None:
        object_name = file_name
    try:
        s3_client.upload_file(file_name, bucket, object_name)
        logger.info(f"Successfully uploaded {file_name} to {bucket}/{object_name}")
    except ClientError as e:
        logger.error(f"Failed to upload {file_name}: {e}")
        return False
    return True

# Usage
success = upload_to_s3('example.txt', 'my-bucket')
if not success:
    logger.warning("Upload failed, please check logs for details")

How did we improve on our previous attempt?

  1. Automatic Credential Management – Using boto3.Session() automatically handles credential retrieval and rotation when used with EC2 instance or ECS task roles.
  2. Error Handling – The code now handles and logs errors, including those that might be related to credential issues.
  3. Logging – Proper logging is implemented for monitoring and auditing.

But we still haven’t addressed all the issues. Let’s go a bit deeper.

IAM Instance Roles

We should ensure our ECS task or EC2 instance has the least permissive role.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}

This directly leads us to the next blind spot.

Blind Spot 4 | Improper Monitoring

In the previous blind spot, we made some improvements, but one gap remains: we need alarms that monitor for potential abuse. The specifics vary by application, but using our previous example, we would probably build the following monitors:

Unusual S3 Access Patterns

  • What to Monitor – The frequency, volume, and type of S3 operations.
  • Why It Matters – A compromised server might be used to exfiltrate data to unauthorized S3 buckets or tamper with existing data.
  • Potential Indicators of Compromise:
    • Sudden spikes in the number of S3 PutObject or upload operations.
    • Attempts to access S3 buckets or objects outside the server’s normal operational scope.
    • Unusual deletion operations (DeleteObject, DeleteBucket) that are not typically part of the application’s function.

Changes in Network Traffic Patterns

  • What to Monitor – Network traffic to and from the EC2 instance or ECS task.
  • Why It Matters – Compromised servers often establish connections to command and control servers or attempt to exfiltrate data.
  • Potential Indicators of Compromise:
    • Unexpected outbound traffic to unknown IP addresses.
    • Sudden increases in data transfer volume, especially egress traffic.
    • Communication attempts to known malicious IP addresses or unusual geographies.

Attempts to Escalate Privileges

  • What to Monitor – API calls related to IAM and security settings.
  • Why It Matters – Attackers often try to escalate privileges to gain broader access to AWS resources.
  • Potential Indicators of Compromise:
    • Attempts to modify the instance’s IAM role.
    • API calls to create new IAM users or roles.
    • Efforts to modify security group rules or network ACLs.

Anomalies in Application Behavior

  • What to Monitor – Application-specific logs and metrics.
  • Why It Matters – Changes in application behavior could indicate that an attacker has modified the application or is using it in unintended ways.
  • Potential Indicators of Compromise:
    • Unexpected application restarts or crashes.
    • Unusual patterns in application-level logging.
    • Deviations from normal API call patterns (e.g., sudden use of S3 APIs not typically used by the application).

Implementing effective monitoring may involve several tools:

Cloud Native Application Protection Platform (CNAPP)

  • Purpose – CNAPPs provide comprehensive security for cloud-native applications throughout their lifecycle.
  • Application in This Scenario:
    • Continuously scan EC2 instances and ECS tasks for vulnerabilities.
    • Detect and alert on anomalous behavior in your application’s interaction with S3.
  • Benefits – Offers a unified view of risks across your cloud-native application stack.

Cloud Security Posture Management (CSPM)

  • Purpose – CSPM tools help maintain a secure cloud configuration and ensure compliance.
  • Application in This Scenario:
    • Regularly assess IAM roles associated with EC2/ECS for overly permissive policies.
    • Monitor and alert on changes to security group rules that might expose your instances.
    • Ensure S3 bucket policies and ACLs are configured according to best practices.

Security Information and Event Management (SIEM)

  • Purpose – SIEM systems collect, analyze, and correlate security event data from various sources.
  • Application in This Scenario:
    • Aggregate logs from EC2/ECS instances, S3 access logs, and cloud provider logs.
    • Create custom correlation rules to detect potential compromise scenarios.
    • Set up alerts for unusual patterns, such as spikes in S3 write operations or unexpected admin actions.

Network Detection and Response (NDR)

  • Purpose – NDR tools monitor network traffic to detect and respond to threats.
  • Application in This Scenario:
    • Analyze network flows to and from EC2 instances or ECS tasks.
    • Detect unusual outbound connections that might indicate command and control activity.
    • Identify abnormal data transfer patterns to S3 buckets.

Regular Security Assessments – Conduct periodic vulnerability assessments and penetration testing on your EC2 instances or ECS tasks.

The key is to establish a baseline of normal behavior for your specific application and alert on significant deviations from this baseline.
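As one hedged example of what this can look like in practice, the sketch below uses a CloudWatch Logs metric filter and alarm to flag a spike in S3 delete operations. It assumes your CloudTrail data events are already delivered to a CloudWatch Logs group; the log group name, threshold, and SNS topic are placeholders.

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "CloudTrail/DataEvents"  # placeholder group receiving CloudTrail events

# Count S3 delete operations recorded by CloudTrail
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="UnusualS3Deletes",
    filterPattern='{ ($.eventSource = "s3.amazonaws.com") && '
                  '(($.eventName = "DeleteObject") || ($.eventName = "DeleteBucket")) }',
    metricTransformations=[{
        "metricName": "S3DeleteOperations",
        "metricNamespace": "Security/S3",
        "metricValue": "1",
    }],
)

# Alarm when deletes exceed the application's normal baseline
cloudwatch.put_metric_alarm(
    AlarmName="S3DeleteSpike",
    Namespace="Security/S3",
    MetricName="S3DeleteOperations",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10,  # tune to your baseline
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-west-2:123456789012:security-alerts"],  # placeholder topic
)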

Blind Spot 5 | Ignoring the Principle of Immutability in Infrastructure

In the dynamic world of cloud computing, immutable infrastructure has gained significant traction. However, many organizations still overlook its importance, creating a critical blind spot in their security strategy.

What is Immutable Infrastructure?

Immutable infrastructure is an approach to managing services and software deployments where components are replaced rather than changed. Once deployed, an immutable component is never modified. If changes are needed, new components are provisioned and the old ones are decommissioned. This gives us several advantages:

  • Reduced Configuration Drift – Immutable infrastructure eliminates the risk of configuration drift, where systems gradually deviate from their intended state due to ad-hoc changes.
  • Improved Consistency – Treating infrastructure as code and using automated deployment processes ensures consistency.
  • Enhanced Security – Immutable infrastructure makes it easier to maintain a known, secure state and quickly roll back to a previous version if issues arise.
  • Simplified Auditing – With immutable infrastructure, you can easily track changes and maintain a clear audit trail of your environment.

In a nutshell, this means engineers do not make manual changes on EC2 instances, do not modify ECS tasks in place, and do not modify Lambda functions in the console (among other things).

To accomplish this we need to discuss a few best practices:

  • Use Infrastructure as Code (IaC) – This is the most critical approach to a successful immutable infrastructure. Leverage tools like AWS CloudFormation, CDK, Pulumi, or Terraform to define and version all deployments.
  • Implement Blue-Green Deployments – Use this strategy to deploy new versions of your application while keeping the old version running, allowing for easy rollbacks.
  • Leverage Auto Scaling Groups – Use launch templates or launch configurations with Auto Scaling groups to ensure new instances are always launched with the latest configurations (see the sketch after this list).
  • Utilize Containerization – Use container technologies and orchestration platforms such as ECS or EKS to package and deploy immutable application components.
  • Embrace Serverless Architectures – Take advantage of serverless services when appropriate to reduce complexity.
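To make the Auto Scaling point concrete, here is a minimal sketch using the AWS CDK for Python (v2): every instance comes from the same definition and is replaced, not patched, when that definition changes. The VPC, instance type, and sizing values are illustrative only.

from aws_cdk import Stack
from aws_cdk import aws_autoscaling as autoscaling
from aws_cdk import aws_ec2 as ec2
from constructs import Construct


class ImmutableWebTierStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        vpc = ec2.Vpc(self, "AppVpc", max_azs=2)

        # Instances are stamped out from one definition; changes mean a new
        # version and a rolling replacement, never in-place edits.
        autoscaling.AutoScalingGroup(
            self, "WebAsg",
            vpc=vpc,
            instance_type=ec2.InstanceType("t3.micro"),
            machine_image=ec2.MachineImage.latest_amazon_linux2(),
            min_capacity=2,
            max_capacity=6,
        )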

Overcoming Challenges

Adopting immutable infrastructure practices requires organizational change and is especially difficult when migrating legacy systems. However, the benefits far outweigh the initial investment. Start by identifying small, manageable components of your infrastructure to make immutable, and gradually expand from there.

By addressing this blind spot and embracing these principles, organizations significantly enhance their security posture, improve operational efficiency, and reduce the risk of configuration-related vulnerabilities.

Blind Spot 6 | Overlooking Security Implications of Serverless and Container Technologies

Cloud-native technologies such as serverless computing and containers create a significant blind spot: Underestimating the unique security challenges inherent to these technologies. This oversight can lead to vulnerabilities that traditional security approaches may not address.

The Shift to Serverless and Containers

Serverless computing such as AWS Lambda and container technologies like Amazon ECS or EKS provide numerous benefits, including scalability, cost-efficiency, and faster deployment. However, they also introduce new security considerations that differ from traditional infrastructure models.

When considering a serverless approach the following concerns must be top of mind:

  • Function Isolation – Ensuring proper isolation between functions is crucial. This involves using resource-based policies, strict IAM roles, and ensuring that functions do not share execution environments unnecessarily. By properly isolating functions, you minimize the blast radius of any potential compromise, meaning that if one function is exploited, it won’t affect others in the same environment.
  • Secret Management – Leveraging proper tools for storing secrets ensures that sensitive information like API keys or database credentials is not leaked via environment variables or SCM commits.
  • Dependency Management – Serverless functions and containers often rely on numerous third-party libraries. It is important to leverage tools to ensure all libraries are up-to-date and free of known exploits. There are many third-party tools that can be leveraged to accomplish this task. Ensuring your supply chain is secure is paramount to any successful serverless/container project.
  • Stateless Nature – Containers and serverless functions are designed to be stateless. Move your state to long-lived services such as databases, caches, and session stores.

Best Practices for AWS Serverless and Container Security

  • Implement Least Privilege Access – Use AWS IAM roles with minimal permissions for Lambda functions and ECS tasks. Regularly audit and refine these permissions.
  • Secure Secret Management – Leverage AWS Secrets Manager or Parameter Store to securely store and retrieve secrets. Do not hardcode sensitive information in your code or environment variables (see the sketch after this list).
  • Enable AWS CloudTrail – A recurring theme: ensure comprehensive logging of all API calls and changes to your serverless and container environments.
  • Use a CSPM – Implement custom rules to continuously monitor and enforce security best practices in your serverless and container setups.
  • Implement Strong API Security – Use proper authentication and authorization mechanisms. Implement rate limiting and throttling to prevent DDoS attacks.
  • Containerize Securely – Use minimal base images to reduce the attack surface. Implement image scanning in your CI/CD pipeline to detect vulnerabilities before deployment.
  • Leverage AWS Web Application Firewall (WAF) – Protect your serverless APIs and containerized applications from common web exploits.
  • Implement Function Monitoring – Use monitoring tools to watch your functions or containers and set up alerts for anomalies. Consider third-party tools specialized in serverless security monitoring.
  • Regular Security Assessments – Conduct penetration testing specifically tailored for serverless and container environments.
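As a sketch of the secret management point above, fetching credentials from AWS Secrets Manager at runtime keeps them out of code and environment variables; the secret name used here is hypothetical.

import json
import boto3

secrets = boto3.client("secretsmanager")


def get_db_credentials() -> dict:
    # Fetch the secret at runtime instead of baking it into the image or env vars
    resp = secrets.get_secret_value(SecretId="my-app/db-credentials")  # hypothetical name
    return json.loads(resp["SecretString"])


# Usage: call at startup, and again after rotation if your secret rotates.
db_creds = get_db_credentials()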

To address this blind spot effectively, we also have to consider the following:

  • Educate Development Teams – Ensure engineers understand the unique security implications of serverless and container technologies. This includes training on best practices for least privilege access, proper use of IAM roles, securing API gateways, and understanding the attack surfaces unique to serverless and containerized environments. Developers should also learn about topics like event injection vulnerabilities, managing secrets, and the importance of isolating workloads to minimize the impact of potential breaches.
  • Update Security Policies – Revise existing security policies to include specific guidelines targeted at serverless and container security.
  • Integrate Security in CI/CD – Implement automated security checks in your CI/CD pipelines, including vulnerability scanning for functions and containers. These checks should include static code analysis, secret scanning, dependency checking, and infrastructure-as-code security validations. Integrate tools that ensure vulnerabilities are caught early in the development cycle.
  • Embrace DevSecOps – Foster collaboration between development, operations, and security teams to address security throughout the application lifecycle. This includes embedding security practices into every phase of the SDLC. Encourage shared responsibility for security, automate security checks in CI/CD pipelines, and ensure that security teams work alongside developers to identify and mitigate risks early. By integrating security as a core part of the workflow, teams can address vulnerabilities proactively rather than reactively.
  • Stay Informed – Keep up with the latest serverless and container security best practices, as this field is rapidly evolving.

Finally, hands-on workshops and real-world examples can significantly enhance an engineer’s understanding and help them adopt security-first thinking in their code.

By recognizing and addressing the unique security challenges of serverless and container technologies, organizations can harness their benefits while maintaining a robust security posture in their AWS environment. This proactive approach is essential in mitigating risks and preventing potential security breaches in these modern, dynamic infrastructures.

Conclusion | The SentinelOne Perspective

Securing any environment requires a continuous commitment to understanding and addressing potential blind spots. Being proactive is critical to designing effective security solutions.

Security is not a one-time thing, but an ongoing practice that evolves with technology and threats. Like fitness, it can’t be stored, only improved. Educating teams, integrating security into CI/CD, and fostering a security-first culture are key to maintaining a resilient environment. By embracing these practices, you can ensure that your environments remain robust, secure, and capable of adapting to future challenges.

As always, Cloud Security is a shared responsibility. While cloud providers secure the underlying infrastructure, it’s crucial for developers and operations teams to ensure they secure their functions, data, and access policies. SentinelOne’s recommendation is to leverage the AWS Well-Architected Framework where possible, to ensure environments are built with secure-by-design principles.

Additionally, Cloud Native Application Protection Platforms, like SentinelOne’s Cloud Native Security, can assist cloud and security teams by detecting and prioritizing cloud risk. Our CNAPP is able to detect misconfigurations across cloud services, infrastructure and cloud identity, as well as vulnerabilities, and provides evidence-based insight into cloud risks that can be externally exploited.

If you and your team are interested in SentinelOne’s view of your cloud health, you can reach out to one of our cloud experts for a Cloud Security Assessment.

Next in our guest blog series, cloud security consultant and writer Rami McCarthy is covering his approach to assessing cloud vulnerabilities.
