Foreword & Guest Author Bio
As part of this ongoing series, SentinelOne is excited to present guest blogs from cloud security experts covering their views on cloud security best practices. Following on from blogs by Aidan Steele and Don Magee, who both focused on best practices for building securely in the cloud, we now have Teri Radichel addressing how teams can avoid faulty logic when creating security policies for their AWS environments.
Teri Radichel provides penetration testing services through her company, 2nd Sight Lab. She started programming in 1981 and has since obtained a Master of Software Engineering, Master of Information Security Engineering and 13 security certifications including a GSE. She’s an AWS Security Hero, IANS faculty member, author on Amazon, and has presented at conferences around the world including AWS re:Invent, AWS re:Inforce, and RSA. She also received the SANS Difference Makers in Security Award in 2017. You can follow her @teriradichel and find out more on her LinkedIn profile.
Introduction
When people configure AWS, they often make individual mistakes such as failing to add MFA to user accounts, exposing resources publicly that should be kept private, or failing to restrict network traffic to only what is necessary. These individual misconfigurations and missing security controls add up, as each one increases the overall risk in an AWS account.
However, the bigger problem I see is that when people address the security problems they find, they often fail to do so in a manner that truly reduces the risk and attack surface in their AWS environments. Many times, these proposed ‘solutions’ fail to fix the security problems or even introduce new ones. The reasons for this include:
- Faulty logic – Creation of policies and solutions that block certain actions while leaving gaps that attackers can easily exploit, making them ineffective.
- Failure to address the security problems at an architectural level – Focusing on singular issues rather than addressing them at an architectural or process level leads to weak security solutions.
- Failure to consider the overall attack surface – Solutions that address one security problem in isolation can end up creating others.
Addressing The Risk of Faulty Logic
Here’s an example of faulty logic: People implement AWS IAM policies to enforce MFA but add BoolIfExists to the MFA condition. While this used to be a recommendation from AWS, it has since been removed after I wrote about why it does not work. If you only enforce MFA when it exists in the request, someone can simply create a request without MFA to bypass your policy. Your MFA condition has little effect in that case.
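To make the difference concrete, here is a sketch of the two condition forms expressed as Python dictionaries (the role ARN below is a hypothetical placeholder). With `BoolIfExists`, a request that omits the `aws:MultiFactorAuthPresent` context key entirely, such as one signed with long-term access keys, still passes the check; with plain `Bool`, the missing key fails it.

```python
# Flawed condition: the "...IfExists" operator evaluates to true when the
# context key is missing from the request, so a call signed with long-term
# access keys (which carries no aws:MultiFactorAuthPresent key) passes.
flawed_condition = {"BoolIfExists": {"aws:MultiFactorAuthPresent": "true"}}

# Stricter condition: plain "Bool" evaluates to false when the key is
# missing, so only requests that actually carry MFA context are allowed.
strict_condition = {"Bool": {"aws:MultiFactorAuthPresent": "true"}}

# An Allow statement built around the strict condition.
# The account ID and role name are hypothetical.
allow_assume_role_with_mfa = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::123456789012:role/example-role",
            "Condition": strict_condition,
        }
    ],
}
```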
Addressing Problems at the Architectural Level
Originally, the justification for adding BoolIfExists to policies was that you can’t use MFA with AWS access keys. The problem was not addressed at the architectural level. A mechanism does exist for using MFA to assume a role when using the AWS CLI. When a user carries out that action, MFA is present in the request and a condition can reliably enforce it.
Here is an alternate solution to enforce MFA in your account: Limit the IAM policies for users with AWS access keys to only allow them to assume a role with MFA. Require MFA in the trust policies for the IAM roles the users are allowed to assume. No exceptions. No BoolIfExists. No bypass of your MFA policy. That solution considers options outside of a single IAM policy to more effectively solve the problem. But this solution is still incomplete from a security architecture standpoint.
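A minimal sketch of that pairing, with a hypothetical account ID and role name: the user policy allows nothing but the role assumption, and the role’s trust policy repeats the MFA requirement, so neither document alone is a single point of failure.

```python
# Hypothetical account ID and role name, for illustration only.
ACCOUNT = "123456789012"
ROLE_ARN = f"arn:aws:iam::{ACCOUNT}:role/deploy-role"

# User policy: the only action users with access keys may take is to
# assume the role, and only when MFA is present in the request.
user_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": ROLE_ARN,
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }
    ],
}

# Trust policy on the role: repeats the MFA requirement, so even a
# mistakenly broadened user policy cannot assume the role without MFA.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }
    ],
}
```

On the CLI, the second factor is supplied at assumption time, for example with `aws sts assume-role --serial-number <mfa-device-arn> --token-code <code>`, which places the MFA context key in the request so the conditions above can match.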
You can fully solve the problem by:
- Adding network restrictions to the IAM policy to limit role assumptions to a specific network which limits access in the event of stolen credentials.
- Using an external ID when granting access to external accounts to protect against the confused deputy attack.
- Limiting access to the specific AWS accounts that need to use the role in IAM policies and trust policies for each role.
- Using a service control policy to provide similar restrictions at the organizational level.
- Protecting credentials that are used to perform role assumptions by storing them in AWS Secrets Manager.
- Encrypting secrets with a customer managed KMS key that has a policy which only allows authorized users to decrypt the secret.
- Using separation of duties for administration of service control policies, IAM policies, KMS policies, and networks.
- Enabling and securing logs.
- Ensuring you can identify misuse of credentials and respond appropriately.
With that, you have a broader architectural solution that solves the problem more effectively and achieves greater risk reduction than a single change to an IAM policy.
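A few of the items above can also be sketched as policy documents. The account IDs, external ID, and CIDR range below are hypothetical placeholders. Note that in a Deny statement, `BoolIfExists` is the safe form: a request with no MFA context key at all also matches the condition and is denied.

```python
EXTERNAL_ID = "example-external-id"  # hypothetical shared secret with the partner
CORP_CIDR = "203.0.113.0/24"         # hypothetical office network range

# Trust policy for a role assumed by an external account: require the
# agreed external ID (confused-deputy protection) and restrict the
# network from which the assume-role call may originate.
external_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::999999999999:root"},  # hypothetical partner account
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {"sts:ExternalId": EXTERNAL_ID},
                "IpAddress": {"aws:SourceIp": CORP_CIDR},
            },
        }
    ],
}

# Service control policy enforcing the MFA requirement organization-wide:
# deny AssumeRole anywhere in the organization when MFA is absent. Here
# BoolIfExists is the safe choice for a Deny, because a request missing
# the key entirely also matches and is denied.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "sts:AssumeRole",
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        }
    ],
}
```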
Considering the Overall Attack Surface
To truly address a security problem, you need to understand how attacks work across all aspects of your environment. Otherwise, you might come up with a solution that solves one problem but opens the door for other attacks. Combined with faulty logic, you may introduce the potential for a subtle attack which is hard to identify or block when it occurs.
Here’s an example of assessing the solution to a security problem: The primary source of data breaches on AWS at the moment is unauthorized use of AWS developer keys used to programmatically access AWS APIs. Since stolen keys are the problem, one short-sighted solution would be to get rid of all the AWS access keys. Problem solved, right? Let’s think about that for a minute.
If you get rid of those credentials, how will the authentication and authorization process work when making AWS API calls? When you get rid of the keys you’re really just moving the problem rather than solving it. You still need some mechanism for authenticating and authorizing AWS API calls. Whatever method you choose to replace AWS access keys may have just as many or more problems.
Advertising that you have “gotten rid of AWS access keys” doesn’t tell me anything about your security. Along similar lines, if you’re going to implement a passwordless solution, you need to investigate what is replacing the passwords and all the ways those replacements may be attacked. Something has to be used to authenticate a user to a system, and if that something can be stolen but cannot be rotated (like your face ID or a fingerprint) then that’s not better than a password. It’s possibly worse. I’ll let you explore all the ins and outs of passwordless. For now let’s get back to AWS access keys.
OIDC (OpenID Connect) is an alternative to using AWS access keys when connecting to third-party systems such as GitHub. The authentication process provides short-lived session tokens rather than using long term access keys. If an attacker can get access to the point where you initiate the session though, it’s no different than having the long term credentials. The attacker can simply continuously request a new token. Incorporating well-designed MFA into the authentication process can prevent attackers from initiating new sessions.
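As a sketch of what this looks like for GitHub Actions (the account ID and repository name are hypothetical): the role trusts GitHub’s OIDC provider and pins both the token audience and subject, so only one repository and branch can request short-lived credentials.

```python
ACCOUNT = "123456789012"                          # hypothetical account ID
PROVIDER = "token.actions.githubusercontent.com"  # GitHub's OIDC issuer host

# Trust policy letting a single GitHub repository's main branch assume
# the role via short-lived OIDC tokens instead of stored access keys.
github_oidc_trust = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{ACCOUNT}:oidc-provider/{PROVIDER}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    # Only tokens minted for AWS STS are accepted.
                    f"{PROVIDER}:aud": "sts.amazonaws.com",
                    # Pin the subject so only one repo/branch can assume the role.
                    f"{PROVIDER}:sub": "repo:example-org/example-repo:ref:refs/heads/main",
                }
            },
        }
    ],
}
```

Even so, this only narrows where sessions can be initiated; as noted above, whoever controls that initiation point can still mint fresh tokens.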
Some MFA solutions involve using a web browser with the AWS CLI. Instead of entering a code in a terminal window when executing a command, you get redirected to a browser to enter your code and then your action is allowed. What new attack surface exists with this solution?
Now, attackers can leverage browser vulnerabilities to steal the code. Chrome has had several zero-day vulnerabilities so far this year. During penetration testing, I commonly ‘steal or abuse’ credentials using web application vulnerabilities such as XSS, CSRF, and SSRF, to name a few. Incorporating third-party code into a web application with an improper CORS policy may also lead to stolen credentials.
The OAuth device code flow commonly used for entering MFA codes via a browser is often the target of phishing attacks: a user is tricked into entering their code into a lookalike website. Users are much harder to phish when entering a code into a terminal window.
Leveraging a YubiKey via a browser is currently the only MFA method that can’t be stolen by a phishing attack using tools such as Evilginx. However, what if you want to enter a second factor in the cloud on an EC2 instance? You can’t use a YubiKey with a browser on an AWS EC2 instance; you’ll have to enter a code. And if you only need the Linux command line, adding a UI and browser increases the attack surface.
You can programmatically generate an MFA code with a YubiKey, but you’ll need to install the YubiKey CLI, which can also administer the device. If an attacker gets onto a developer’s machine, they may not have permission to install new software, but they can leverage the CLI to tamper with the YubiKey.
You can also use Google Authenticator to obtain an MFA code. Unfortunately, Google decided to allow you to export the seeds from the application rather than ensuring that only your device holds the seed and can be trusted as the sole source of that MFA token. That means any malware that gets onto your device with enough access can export your MFA seeds and use them on a device the attacker controls. Rotate your seeds and secure the device where you have installed Google Authenticator.
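To see why an exported seed is so dangerous, consider a minimal RFC 4226/6238-style sketch of how authenticator apps derive codes. Nothing beyond the seed and the clock goes into a code, so any device holding a copy of the seed produces identical codes in the same time window.

```python
import hashlib
import hmac
import struct
import time


def hotp(seed: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password for a given counter."""
    digest = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(seed: bytes, step: int = 30) -> str:
    """RFC 6238 time-based code: HOTP over the current 30-second window."""
    return hotp(seed, int(time.time()) // step)
```

The RFC 4226 test vector seed `12345678901234567890` yields code `287082` at counter 1 on any machine that has the seed, which is exactly why an exported seed is as sensitive as a password.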
This exercise demonstrates how to consider your attack surface and how it changes as you adjust your security. Will your solution increase or decrease your overall risk? Instead of focusing on a singular problem in a vacuum, consider that problem in the broader context of your architecture and processes. Understand attacks. Beware of faulty logic. And remember to consider how your security team monitors your systems as you implement your security solutions, since monitoring is part of your overall ability to manage threats to your AWS environment.
Conclusion | The SentinelOne Perspective
This post underscores a critical truth: In cloud security, good intentions without architectural discipline often create more vulnerabilities than they resolve. The evolving attack surface of cloud and container environments, combined with the cloud service providers’ shared responsibility model, can be a steep hill to climb alone. Poor assumptions and half-baked fixes not only fail to mitigate risk, they reshape the attack surface in unpredictable ways.
At SentinelOne, we believe that effective cloud security is about reducing the opportunity for compromise via an evidence-based approach to prioritizing risk and being ready for real-time threats as they inevitably occur. We also believe that cloud security must be both proactive and autonomous – rooted in visibility and context, with the ability to detect and respond at machine speed. Through Singularity Cloud, SentinelOne enables organizations to:
- Prioritize what really matters with verified exploitable validations of cloud risk,
- Correlate activity across cloud endpoints, native cloud logs, and on-premises environments, exposing subtle risks that evade traditional monitoring,
- Respond autonomously, eliminating threats before they move laterally or escalate privileges,
- Interrogate your cloud environment conversationally and rapidly build repeatable workflows and playbooks for efficient, continuous cloud health.
As cloud attack surfaces grow increasingly complex, SentinelOne is here to help you evolve your cloud security posture beyond static checks and endless noise, toward unified enterprise protection that is proactive and autonomous.