Changing Directions

This blog has served me well over the last few years. However, for the sake of my own sanity, I've decided to consolidate my blogging efforts on Blogger. Furthermore, this name and title are no longer the best representation of who I am and where I am at. This started as a developer blog, writing on developer things. As I've formed a more solid identity as a security engineer & penetration tester, this place no longer feels like my own skin.

That doesn't mean I will not be blogging. On the contrary, I am going to be contributing more in other places. My new personal blog, "Fuhyô no Michi" (不評の道), can be found here:

I am also now VERY excited to be contributing technical posts to Attack Research / Carnal0wnage. These guys have impressed me with their technical skill and insight for a while, and I hope to be able to keep up with that tradition. That blog can be found here:

Finally, I am soon to be blogging with another group (not yet announced).  I will update this post to include those details once they are available.  This blog will remain available for historic reasons.

Thank you for bearing with me.


Crossfit: Forever Strong

Let me first start this post off with an assurance that I haven't forgotten the series. I've been working on a few articles; however, I keep finding vulnerabilities in the things I wanted to show, so I have to wait until I can find something I can share.

However, since those articles aren’t ready for the public yet, I wanted to write about something else important to me– my family’s health.  A few months ago I reached out on the tweeters for ideas on a “bootcamp” style work out program.  One of my coworkers (dasfiregod) answered that crossfit might suit my needs.  It did, and then some.

I did a quick search online for gyms near us and found a handful. I've always been interested in a military-style/cross-functional program, and this struck a chord right away. I visited one of the gyms and immediately fell in love with the place. I convinced my wife to join as well, and thus our journey began.

Fast forward six months. I personally have lost 20 pounds since starting. My wife has had similar results and hasn't been at her current weight since high school. My body fat percentage was somewhere around 28% when I first started; it is 19.2% now. The results speak for themselves.

But it's more than just the results. The gym I go to (CrossFit Forever Strong) has become like a third family. I wish I could explain this better, but the people there genuinely get excited about and care about you and your growth. Many of the people even go to my church; one of whom was in a marriage preparation course 5 years ago. I have never been excited to go to a gym… until now.

So, there is hope. I am not unique, but I thought of some tips to share:

  • Water is important.  I drink about 1 gallon of water a day now.
  • Your mind wants to play tricks on you.  You ARE capable– let your body do the talking.
  • If that’s not helpful, use music.  It’s the lubricant that lets me ignore the evil noises in my head.
  • The paleo diet is a big portion of our success.  It took a bit to transition, and we aren't zealots.  I put cream in my coffee.
  • Hard work and amazing trainers (Jason & Sally) are the other portion of our success.

Casualties of Errata

To preface the rest of this article, I am aware that I am upset right now.  I try not to write when I am like this, however the BSides drama has already caused a potential sponsor to back out.  This might leave me and some others with large bills to pay.  I know that God provides, but I am disappointed with the decisions of others.

Also to be clear, I don’t want to be involved with any of the shenanigans in this.  I don’t want the local BSides conference we are organizing to be involved in this.  But we already are– so I am weighing in publicly.  First and foremost– I don’t know who is right or wrong in all of this.  The issues presented, to me, come across as personal problems with a few people.  That doesn’t mean there isn’t some truth to the errata article, but it also doesn’t make MikeD a charlatan because he isn’t “open minded” and won’t reply to someone’s email.  There are some legitimate concerns about non-profit status and transparency, but there are mountains and there are mole hills– this is a mole hill.

The way the evidence has been presented is irresponsible.  There are better ways to have brought these concerns up that would have been less damaging.  If mommy and daddy need to fight, they should do it in private and not in front of the kiddos.  The public fashion this was done in has already negatively impacted these conferences– despite the reality that this could all just be a misunderstanding.

All that said, I want to offer some assurances to our sponsors and everyone as a whole. First and foremost, we intend to make no money off of the local BSides Phoenix event. Money is being collected through my personal company mostly as a legal shield. I am willing to disclose, upon request, all of the funding details and receipts of expenses. These are already being communicated internally to the group.

In the case that any extra money is left over (which at this point is seemingly less likely)– the remaining balance will be donated to a charity such as EFF or Hackers for Charity.  That will also be made public upon request.

I am not asking you to trust BSides; I am asking you to trust me. This event is to be for the local community's benefit. Please don't let this drama deter you from supporting us in our project to do so.

Grammar: The Stuff of Exploits

Communicating clearly can be difficult.  Consider the following sentence:

The police officer and bandit pulled their triggers.  Shots were fired, and he went down.  He breathed his last breath.*

This passage is a legitimate use of language; however, it is awkward because of an unclear antecedent. Who shot whom? Did the police officer shoot the bandit? Did the bandit shoot the police officer? The reader is left to make up his or her mind on how to handle this.

Written or spoken, clarity of language is accomplished by removing such ambiguities.

But I am not in the business of copy editing. My job is to lie to computers and people as a means to circumvent filters and exploit weaknesses. Because people write software, it should be unsurprising that similar types of ambiguities can be used against applications.

Consider the following URL scheme**:

If you use this URL in Internet Explorer you are taken to one host; if you use it in most other browsers you are taken to a different one. This is because of the same type of issue we faced with our poorly written sentence. It is perfectly valid syntax; however, it is ambiguous in intent and causes the reader (aka: the browser) to make a decision as to what was intended.
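To see the same class of ambiguity without naming any real sites, here's a sketch using Python's standard URL parser. The hostnames are hypothetical; the point is that a URL with two "@" characters is syntactically valid, yet different parsers may disagree on which part is the host.

```python
from urllib.parse import urlsplit

# An ambiguous URL: which "@" separates the credentials from the host?
# (Hostnames here are made up for illustration.)
url = "http://alice@example.com@evil.test/"

parts = urlsplit(url)
# Python resolves the ambiguity by splitting on the LAST "@":
print(parts.hostname)  # evil.test
print(parts.username)  # alice@example.com
```

Another parser could just as legitimately split on the first "@" and hand the request to example.com. Neither choice is "wrong"; the grammar simply left room for interpretation.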

Languages support these types of clauses because, when used properly, they can be useful. But when ambiguous situations arise, it is nearly impossible to make "the right" decision. You can, at best, make a decision, but who can say whether it was a good one.

If you think you have it right, look back again at the first sentence. Because of the way the information is presented, our mind is drawn into the assumption that the police officer and bandit shot each other. But why? In context, it is entirely possible that they both shot at someone else entirely. When you add scope into the mix, context can change in radical and unpredictable ways.

This reality is horrible for anyone who is trying to identify "malicious" data. You often don't know what is malicious until it is too late, and you can't exactly forbid language constructs that are useful. This leaves lots of room to shuck and jive, often to your detriment.

Welcome to grammar.


(I really am not a copy editor, nor do I have one.  If there are mistakes with this post– please be kind.)

* Example based on the book, “It was the best of sentences, it was the worst of sentences” by June Casagrande
** Example borrowed with permission from the amazing book, “Tangled Web” by Michal Zalewski

Reverse Engineering Web Apps: Architectural Composition

Don’t worry, give it 10 years and you will be an overnight success.  – K. Slatoff

Since our process of reverse engineering relies heavily on pattern matching, being able to identify and decompose architecture is a critical skill. Unfortunately, there aren't very many shortcuts here. I personally feel as though this skill is one of my greatest strengths, but it took 9 years or so of developing software to get here.

In spite of that, you still need to be familiar with common patterns to do the real work of web application reversing & penetration testing.

Compiled binaries have a bit of a leg up on us here. When you download an application, the file format is generally fairly easy to determine. This gives you some very key insights into how an application works, where data is stored, and how it is structured. This is not true of the web.

Luckily for us, developers have a penchant for reusability. This means that their applications are built on top of frameworks, leverage shared components, and are most often structured in known/public ways. Patterns and algorithms are the cornerstone of proper engineering. That is also great for us, because even if an 'engineer' isn't proper, they still rely on things which are. If you're using MVC.Net to build your application, regardless of your skill level, you have to go out of your way to not use MVC. This is true for all other frameworks as well.

Web Patterns

One of the best resources I’ve found for this is Martin Fowler’s Patterns of Enterprise Application Architecture book.  Subsequently, he has published a briefing on many patterns here:

Since we are discussing web apps, the web application presentation patterns are of most interest.  Read up on all of them, but in particular I find that 3 patterns are most popular.

Page Controller 

In this pattern, the page itself is the controller, which is just a fancy way of saying it's responsible for binding the model (core application data) to the view (user interface presentation). This is pretty easy to spot, as the page name is the action it wishes to perform (such as:,, etc…)

In this pattern, I treat each page as its own API, since the ProductEdit page is likely to expect a whole different set of parameters than ProductDelete. For all intents and purposes, each page is a silo, loosely communicating with the others through query string, cookie, or POST parameters.
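The "each page is its own API" idea can be sketched in a few lines of Python. The page names and parameter sets below are hypothetical, but they show why mapping a page-controller application means building one small parameter inventory per page:

```python
# Each "page" is its own controller with its own expected parameters,
# effectively a separate little API (names here are hypothetical).
PAGES = {
    "ProductEdit":   {"id", "name", "price"},
    "ProductDelete": {"id"},
    "ProductView":   {"id"},
}

def handle(page, query):
    """Dispatch to a page controller; each silo validates its own inputs."""
    expected = PAGES.get(page)
    if expected is None:
        return 404
    if set(query) - expected:
        return 400  # a silo only understands its own parameter set
    return 200

print(handle("ProductDelete", {"id": "7"}))    # 200
print(handle("ProductDelete", {"name": "x"}))  # 400
```

When testing, this is why a parameter that does something interesting on one page often does nothing at all on its neighbor: the silos don't share a vocabulary.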

Front Controller

A front controller is a somewhat similar pattern.  The page itself is a type of controller, except that it mostly operates as a router of commands.  Drupal and WordPress work this way, despite their ability to appear as MVC.

In this pattern, you see pages like:


This is either applied broadly (such as an index.php page) or more specifically to a functional area (such as product.php?action=edit).

In either case, it's also fairly straightforward to decompose.

The scope of the API used to communicate with these types of applications is based on the scope of the controller. In a global scope, the index controller has to support all of the parameters that could come through it. Though some of these commands may be ignored, the general size of the API is often fairly large. In the more focused scope, the API is usually smaller. It is not uncommon to be able to call admin commands from a less-than-admin controller if the ACL on the commands is not set up correctly. It is also easy to guess that there might be an action=edit if you see lots of action=view type commands.

Model View Controller (MVC)

In this pattern the URL structure is more than a resource locator; it's a syntax for communication (also referred to as RESTful). In this pattern you have a clear abstraction of the view, the model, and the controller. This usually looks something like:


There are, of course, variants of this syntax. For instance, a default action and default controller could be used and allow for a call like:

/Products/id == returns the view action for the id.
/id == returns the view action of Products by id.

This pattern also creates some interesting dynamics as far as composition is concerned. Consider that, while the latter call will work, you might ALSO be able to call this page by browsing to /Views/Products/Edit.aspx and POSTing an ID to the page. This can create interesting side effects if permissions are not set correctly (especially for partial views and JSON results).
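A hedged sketch of how the default-controller/default-action resolution above might work. The defaults and segment layout are hypothetical (real routers also disambiguate /Products/7 from /Products/Edit with route tables; this sketch just assumes a bare second segment is an id):

```python
# Sketch of RESTful route resolution with defaults (names hypothetical).
DEFAULT_CONTROLLER = "Products"
DEFAULT_ACTION = "View"

def resolve(path):
    """Map /Controller/Action/id style paths onto (controller, action, id)."""
    segments = [s for s in path.split("/") if s]
    if len(segments) == 3:
        controller, action, item_id = segments
    elif len(segments) == 2:
        controller, item_id = segments      # /Products/7 -> default action
        action = DEFAULT_ACTION
    elif len(segments) == 1:
        controller, action = DEFAULT_CONTROLLER, DEFAULT_ACTION
        item_id = segments[0]               # /7 -> default controller + action
    else:
        return None
    return controller, action, item_id

print(resolve("/Products/Edit/7"))  # ('Products', 'Edit', '7')
print(resolve("/Products/7"))       # ('Products', 'View', '7')
print(resolve("/7"))                # ('Products', 'View', '7')
```

Knowing the default-filling rules is what lets you guess the "long form" of any short URL you see, and vice versa.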

This pattern has become super popular among many frameworks. Ruby (Rails), Python (Django), MVC.Net, Spring, Struts, etc. all use this pattern primarily for their web applications.

Notable “Stuff”

The aforementioned patterns are considered "enterprise patterns" specifically related to architecture. Component patterns (or design patterns) are also important to understand, since they are how individual components are built. Since this post is already somewhat long, we will talk more about component-based composition discovery next time.

The rest of the 'stuff' below represents patterns which fall into categories less easily spotted on a webpage, but which are useful in figuring out how something works. Unless a developer mistake shortcuts this process (such as an exception with a full stack trace), you can only reliably get an understanding of these components through interaction.

Data Access Patterns

There are three means of data access which are important to have some exposure to. This is most useful to note if you have SQL injection, but it can also be helpful in identifying points in the application which MIGHT be vulnerable.

These three access patterns are: string concatenation (aka: evil), parameterized queries (most common), and stored procedures.

There are very subtle and unique ways to figure out how the data access pattern is composed, but the SQL Injection Attacks and Defense book does a better job of outlining them than I will attempt.
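As a quick illustration of why the first two access patterns behave so differently, here's a self-contained sketch using Python's sqlite3 (the table and input are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "x' OR '1'='1"

# String concatenation (aka: evil) -- the input becomes SQL syntax:
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(len(rows))  # 1 -- the injected OR clause matched every row

# Parameterized query -- the input stays data, never syntax:
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(rows))  # 0 -- nobody is literally named "x' OR '1'='1"
```

Stored procedures sit in between: they are usually parameterized by construction, but can still be vulnerable if they build dynamic SQL internally.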


You are also unlikely to be able to reverse an algorithm used on the web, with perhaps the exception of various cryptographic ciphers or hashes. But you ought to be familiar with various important algorithms, as lists, data retrieval, and binding are things which come in handy in more advanced attacks. You ought to know the difference between linked lists and sets, for instance. Most web applications just use generic or typed lists; however, I've run into situations where understanding how the data was being cached (as a set) made it possible to shortcut the caching mechanism (which was important so I could generate the pages uniquely each time).

There are super formal algorithm books, but also good introductory ones.

AJAX Patterns

AJAX patterns are also very useful to be able to identify when testing a site. OFTEN these represent great chances to bypass WAF or application-level input filtering mechanisms. There are basically only three approaches. The first puts the processing of the display entirely in the hands of the client (and just sends raw JSON back to the AJAX call). The second returns the entire component, processed by the server. This approach was favored for a while in ASP.NET Ajax's mechanisms. The final is a hybrid where parts are processed server side and parts are processed on the client.
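A rough heuristic for telling the three response shapes apart can be sketched like so (the classification labels are my own shorthand, and real responses are messier):

```python
import json

def classify_ajax_response(body, content_type):
    """Rough guess at which AJAX pattern produced a response."""
    if "application/json" in content_type:
        try:
            json.loads(body)
            return "raw data: client renders everything"
        except ValueError:
            pass
    if body.lstrip().startswith("<"):
        return "server-rendered fragment"
    return "hybrid/other"

print(classify_ajax_response('{"items": [1, 2]}', "application/json"))
print(classify_ajax_response("<div>partial</div>", "text/html"))
```

Raw-JSON endpoints are the interesting ones for filter bypasses: the server often skips output encoding entirely on the assumption that the client will handle it.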

How much you will be able to manipulate these features later will depend largely on how they are composed.


Patterns are ubiquitous and unavoidable. They range from the super formal to something more commonly known as spaghetti. This mess (their mess) is one of the first things you are going to be unpacking as you work through a site. Applications might be a mix of one or more of these patterns, as each component they implement could leverage a different pattern for its development.

Understanding architectural composition is my ground zero of a test, a scoping step if you will. It is a lot of information to grok, but once you do, it only takes a few minutes to figure out. The best way to get experience with this is to build sites with these various approaches.

But I reiterate, this skill dictates where the entire rest of the test goes.  I believe that composition is destiny– at the very least it’s a predisposition.  Each pattern has strengths and weaknesses, which you can only take advantage of if you have the chops to first recognize them.  MVC for instance suffers from model binding (aka mass assignment) attacks, whereas front controllers might have command injection / authorization issues.  I stack the deck as much as I can here and try to know more about architecture than the developers themselves.

If I were going to train a person on web application testing in general, enterprise and design patterns would be where I spent nearly all my time for a while.  More on design patterns next time.

Reversing Web Apps: The Caveats

Because our process of reversing is not a direct 1:1 mapping to compiled reversing, we have to clarify a bit on how we can be successful. Although some frameworks generate HTML based on the underlying code, HTML cannot always be reversed to a state of source. People do weird stuff. So we must additionally rely on application behaviors and concepts found in forensics and social engineering.

The primary basis of our reversing approach is on Locard’s exchange principle.

Wherever he steps, whatever he touches, whatever he leaves, even unconsciously, will serve as a silent witness against him. Not only his fingerprints or his footprints, but his hair, the fibers from his clothes, the glass he breaks, the tool mark he leaves, the paint he scratches, the blood or semen he deposits or collects. All of these and more, bear mute witness against him.

Locard was a smart dude. You can't do things in life without leaving some evidence behind about how and why something took place. Even the attempt to "clean" a crime scene leaves evidence that the crime scene itself was cleaned. This holds especially true when building applications*. Since information leakage isn't in the OWASP Top 10 list, most applications are like billboards which scream how they were built. Furthermore, how an application responds or behaves against data is just another way to identify what it's composed of.

As a very easy example, let's look at a typical ASP.NET WebForms based application.

The first bit of evidence is the file extensions; .NET applications typically use .aspx, .ashx, and .asax. This immediately focuses you on either an ASP.NET MVC application or a WebForms one. To identify which was used, we can use unique features of WebForms such as ViewState or EventValidation. These don't generally exist outside of WebForms, because ASP.NET MVC pages are not event driven and are supposedly RESTful. These framework features are obvious and easy to look for (read: grep & view-source). Because ASP.NET WebForms is event driven, it likes to mangle the names of objects in order to make sure that you don't have naming collisions. As a result, if you had an ASP.NET Panel control which contained an ASP.NET TextBox control, you'd have an HTML rendering which looked very similar to:

<div id="Panel_NamedPanel">
  <input name="ctl100$Panel_NamedPanel_TextBox1" type="text" value="oh hai" />
</div>

This special naming convention suggests not only the framework, but even the version (as previous versions use a different convention). IIS also tends to tell you the framework version, and there are default ASP.NET folders you can test for to see if they exist. A "Views" folder will exist for MVC .NET apps, and is unlikely to exist for a WebForms one. Failing all that, look at the careers page and see what they want new developers to know. 🙂

Like I said, lots and lots and lots of evidence.
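The grep-and-view-source process above can be automated as a crude fingerprinter. The marker lists here are illustrative, not exhaustive, though __VIEWSTATE, __EVENTVALIDATION, __RequestVerificationToken, and the X-AspNet-Version header are real WebForms/MVC/IIS tells:

```python
# Rough framework fingerprinting from response evidence (markers illustrative).
MARKERS = {
    "ASP.NET WebForms": ["__VIEWSTATE", "__EVENTVALIDATION", "WebResource.axd"],
    "ASP.NET MVC":      ["__RequestVerificationToken"],
}

def fingerprint(body, headers):
    """Guess the framework from grep-able evidence in one response."""
    hits = {}
    for framework, markers in MARKERS.items():
        found = [m for m in markers if m in body]
        if found:
            hits[framework] = found
    # Headers leak too: IIS/ASP.NET commonly advertise themselves.
    if "X-AspNet-Version" in headers:
        hits.setdefault("ASP.NET (version header)", []).append(
            headers["X-AspNet-Version"])
    return hits

body = '<input type="hidden" name="__VIEWSTATE" value="..." />'
print(fingerprint(body, {"X-AspNet-Version": "4.0.30319"}))
```

One response rarely settles it; the value is in accumulating hits across pages until one hypothesis dominates.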

By just having the application framework identified, you have reduced your working set significantly**. If you suspect that the site you were looking at was built on a content management system, you could use the google to search for any “unique” named fields or pages to see if any results come up which might help you identify the framework.  I use technique this often.

Secondly, because our process is based on feedback cycles, how we interact with the site is important.

Although some people use the terms active & passive testing, I find them misleading. You are nearly always actively testing the site, though sometimes in less obvious ways. I prefer the terms elicitation and interrogation. In elicitation, you are strategically asking the application a series of questions which are reasonably acceptable in normal use. This is done not only to avoid setting off triggers (IDS) and ending the conversation, but also because sometimes it's the best way to get information. Interrogation, on the other hand, is often far more aggressive, and it is very obvious when it's being done***. To compare and contrast, I might elicit details about an encoding scheme used on a web application with a creative user/details such as:

Name = John "the duke" O'Reilly
Street = 123 Some Street #123 (near 4th & Thomas)
City = Phoenix/Ahwatuke

This user could very reasonably exist, and concurrently tests different reserved characters to see how they are handled. The name is unique enough that it makes it easy to later grep for in results to see where it's used throughout an application. It is also unlikely to ever be in someone's WAF, so I have an incredibly strong chance of not being bothered by one if it exists. If I were testing this in a more interrogative sort of way, I might just spam the fields with a list of XSS attacks like:


Conversely, these payloads MIGHT be in a WAF and could be blocked, despite the field being vulnerable. Neither approach is "better" than the other; they are just used in different places for different reasons. The trick is, of course, to know when to use which, and what might cause deviations in your ability to understand the response. For instance, just like in interrogation sessions, applications tend to shut down if you are too aggressive. Or if you are too obvious with your questions, a WAF might block keywords and become (in a theoretical sense) aware of your deceptions. People aren't really named Bobby DropTables.

But just to be complete, it wouldn't matter so much if they did block it. The sheer fact that it's blocked indicates some type of countermeasure, either a WAF or an application filter. You can distinguish between the two with forensics. WafW00f (or Waffit) is an example of a tool which attempts to figure out what WAF is being used by testing various encodings that WAFs handle in general. If it's an application filter, these are sometimes implemented as plugins, and you can try to force browse to see if they exist. If those fail, you can look for gaps where an application filter might not be applied. In ASP.NET WebForms, for instance, some controls don't encode output data by default. Sometimes you can bypass an application filter with an attack against an AJAX type service; a WAF might still filter the data, where often application filters don't. You could try comparison measurements against pages with known and made-up parameters to see how they are handled. It goes on and on and on.
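The elicitation half of this can be sketched as a simple reflection check: seed a realistic value containing reserved characters, then classify how it comes back. The responses below are mocked, and the classifier only knows a few encodings, but it captures the feedback-cycle idea:

```python
import html

# A realistic seed containing quote characters (same spirit as the user above).
seed = 'John "the duke" O\'Reilly'

def classify_reflection(response):
    """Infer what encoding (if any) was applied to our seeded value."""
    if seed in response:
        return "raw (no output encoding)"
    if html.escape(seed) in response:
        return "HTML-entity encoded"
    if seed.replace("'", "''") in response:
        return "SQL-style quote doubling"
    return "not reflected / transformed"

print(classify_reflection('<b>John "the duke" O\'Reilly</b>'))
print(classify_reflection("<b>John &quot;the duke&quot; O&#x27;Reilly</b>"))
```

Because the seed is unique, grepping every subsequent response for it also maps where the value travels through the application.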

You can’t stop the signal.

Our final basis is that an application's behaviors can assert its relationships, entities, and types.

This concept will be discussed and demonstrated at great length as we get into decomposition. It's worth noting, for now, that this approach is used somewhat frequently when testing malware. Allowing the malware to affect/infect controlled systems lets the reverser discern not only what it does, but what it might be built of. In order to do X, an app might be composed of Y and Z. This basis provides useful evidence for asking intelligent questions later on.

Le Finale

The engineering process is one of pragmatism. Applications aren't built in total isolation. They use frameworks to develop with, and reuse code (patterns & algorithms) to solve problems. Developers also aren't generally aware of how obvious that is, which makes it VERY easy to gain visibility into what they've done. Despite not being a 1:1 relationship to compiled reversing, we can be very successful in figuring out how an application is built.

If a website boldly declares it's written in ASP.NET WebForms, you should have the MSDN articles open speaking to what might be there. If a website further boasts of being built on top of DotNetNuke, you should download the source and have a local copy you can use to help navigate the site you're looking at. It is always in your best interest to download the framework locally and use it as a frame for your test.

Every bit of evidence can and should be used against them.


* Some apps would be best served if developers tried to cover up that they wrote it, I’ve seen many a travesty in my time.
** Reducing your working set is a way to digest information without overwhelming yourself.  It's usually a good idea, so long as you don't mistakenly remove things that are needed from the working set.
*** Interrogation techniques are wide ranging, so perhaps my term isn’t as accurate as I’d like either.  But, because interrogation is fairly obvious when it’s happening I think it works for now.

Reverse Engineering Web Applications: The Series

There is only so much you can share in a talk, and so I’ve decided to turn a short 50 minutes into a rightfully lengthy series.  I know this post is long, but I kindly ask you bear with me.  We will revisit the topics discussed in this post repeatedly throughout our series– so it’s best to establish some basis and familiarity with them now.

Indeed, it makes little sense to jump into the technical meat and potatoes without first defining the words, processes, and concepts to evaluate the work ahead of us. This post, after all, serves as our guide, establishing goals and valid measurements of whether we are successful or not.

Reverse Engineering

In the compiled binary world, reverse engineering is the taking of an application (executable or DLL) and leveraging a combination of compiler theory and assembly to arrive at a reasonable representation of its original source code. And while this definition is accurate, it's somewhat mechanical and not exactly very revealing. We are going to define reverse engineering as:

The art of deducing an application’s elements, composition, behaviors, and relationships.

This definition is more functional, as it establishes the goals of our process. Why someone might reverse engineer an application varies with intent, and is to a degree irrelevant to the conceptual goals. That is, people reverse engineer for a variety of reasons, including interoperability, general education, and security testing. Although each of these reasons dictates unique attentions and focuses, our conceptual goals still stand. In our case, "why" we reverse engineer applications is predicated on the belief that security is a visibility problem.

In the ideal world, every engagement would grant me source code access and a copy of the application/environment*. Having 100% visibility into the static and dynamic environment of an application is incredibly powerful. By its nature, it eliminates the need for guessing and makes attacks significantly more informed and reliable. Simply put, a better job can be done because this is a position of advantage. It stands to reason, then, that in all situations less than the ideal, we must reverse engineer to get into that position.

The Process of Information Gathering

Now, if you've been around the block, you might note that few (if any) in the appsec industry use this lingo. In its stead, you will hear about information gathering and in some cases even analysis. The Web Application Hacker's Handbook (WAHH) uses this combined definition as the entry point to any web security test. While I believe the track they are on is correct in a sense, I'd dare suggest the picture painted is inaccurate.

Traditional information gathering, as defined by OWASP, WAHH, and many others, is ubiquitously listed as the first step in the hierarchy of checklist-style web testing. The laundry list of tasks it outlines includes:

  1. mapping visible content
  2. identifying non-visible content (forced browsing, search engine discovery)
  3. testing for debug parameters
  4. identifying data entry points
  5. identifying the technologies used in the application
  6. mapping the actual attack surface
  7. analyzing exceptions

These tasks are further broken down into numerous sub-tasks and subtle implications, such as testing with various browsers, extracting a list of every parameter used in the site, gathering comments, and so on. Though perhaps not as apparent up front, these tasks create a huge amount of upfront work if you were to follow them to the letter.

Which is why basically no one tests like that.

Let's presume for a moment that you had the time to do all this. This is not a simple presumption, mind you, as the time this takes is exponential in the size of the application. But let's say you did. Relative to the tasks and goals we defined for reverse engineering, what have you learned? You've collected a series of facts about the application, but you are realistically in no better a situation than when you started. Gathering a list of every parameter in a site doesn't make you better situated to test any of them in a relevant way. You still lack context and understanding (more so if you used automation to achieve this).

And did I mention it's slow? As the surface area of an application grows, especially its dynamic surface area, so does the amount of information you can possibly collect. Take just one aspect, the discovery of non-visible content, for instance. While there are only a few approaches, most commonly this is performed with tools such as DirBuster and Burp's discovery tools**. These tools throw mutations and variants at the site based on content previously learned about. This approach sounds good, but the work grows very quickly and in most cases never actually finishes on anything except the most trivial of sites.
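A back-of-envelope sketch of why this never finishes. The numbers are hypothetical, but the multiplication is the point: candidates scale as wordlist × extensions × discovered directories, and every discovery adds a new directory to recurse into.

```python
# Rough growth model for brute-force content discovery (numbers hypothetical).
words = 10_000       # a modest wordlist
extensions = 8       # .php, .aspx, .bak, .old, ...
known_dirs = 50      # directories already discovered, each recursed into

per_level = words * extensions
total = per_level * known_dirs
print(per_level)  # 80000 requests just at the root
print(total)      # 4000000 once you recurse into known content
```

And that is before mutation-based variants of already-seen names, which multiply the candidate pool again.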

So.  It is fair to say that some types of information collected (and some collection methods) are far more valuable than others. In most actual instances of reverse engineering (outside the web), it's rare one would try to collect "everything" about an application. More common would be to understand and evaluate a specific item of concern or to unpack a behavior. The task of information gathering and analysis (in our case reversing) is only valuable in its ability to drive us forward towards a goal. We do not collect information for information's sake.

Instead, when asked to describe their methodology, the most common answer I hear*** is somewhat nebulous and uncomfortable. It's often described like this:

I use the application/system like a normal user would, and follow leads and play with interesting things as they come up.  I keep going until I feel as though I’ve hit all the important stuff.

Yikes. Admittedly, that's not a very well-defined process, and surely not something that can be taught to others in its current state. Luckily, however, there are indeed ways to unpack the gems hidden in that statement.

The Art of Reverse Engineering Web Applications

The first thing to note of the aforementioned description is that the process is best understood as iterative, not hierarchical. The application is revisited over and over, and as new information is discovered, it is absorbed into our understanding and acted on when it's determined to be valuable. Deciding what to test is both natural and dynamic. In contrast, hierarchical testing implies you gather a bunch of details once and move on. Waterfall, as a development methodology, has fallen out of favor for building software, so why would we test that way?

For me, understanding this iterative process hit home when I studied a bit on John Boyd.  Boyd was a modern military strategist in the Air Force who is perhaps best known for his work on maneuver warfare and the OODA (Observe, Orient, Decide, Act) loop.  The OODA loop provides interesting insight into our natural method of processing information and deciding how to act on it.  It proposes that we take in information and evaluate its worth based on our history, emotions, biases, and cultural experiences.  Once it’s evaluated, we decide what to do with it and act on it.  This loop is constant, generally subconscious, and usually very quick (though some loops are longer than others).  You may not consciously choose to operate this way, but I believe you do regardless.
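Purely for illustration (the function names and the toy example are mine, not Boyd’s), one pass through the loop can be rendered as a tiny function: observations are filtered through the tester’s orientation before any decision or action, and the action changes what gets observed next.

```python
# A toy, purely illustrative rendering of one pass through the OODA loop.
# The four phase functions are supplied by the caller; nothing here is a
# real testing API.

def ooda_pass(observe, orient, decide, act, world):
    raw = observe(world)        # Observe: take in new information
    weighed = orient(raw)       # Orient: filter it through experience and bias
    choice = decide(weighed)    # Decide: commit to what to do with it
    return act(choice, world)   # Act: change the world, then loop again

# Trivial example: count what's visible, weigh it, pick an action, report.
result = ooda_pass(
    observe=lambda w: len(w),   # notice how many items exist
    orient=lambda n: n * 2,     # weigh it (here: just double it)
    decide=lambda n: n + 1,     # choose an action
    act=lambda c, w: c,         # "act" and report back
    world=["login", "search", "admin"],
)
print(result)  # 7
```

The key property the loop captures is that `act` feeds back into the next `observe`– the loop is run again and again, not once.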

The implication of this type of testing (and further hinted at in our description) is that the tester must rely heavily on their ability to see patterns and deviations from those patterns.  This places a premium on exposure to a wide variety of patterns and practices, such that they can be observed and oriented to appropriately.  While it’s theoretically possible that the first time you test an MVC-patterned site you’d discern its inner workings and details, it is unlikely– about as likely as writing a masterfully composed song the first time you pick up a guitar.  Possible, but not likely.  As such, we will spend considerable time discussing patterns for web applications in later posts.

Finally, the aforementioned process definition forces us to face the most common rebuke I get when sharing this approach– “how do you know when you are done?”– which is usually coupled with an expressed desire to be thorough and to ensure the client gets the best test.  This question is a good one, and not one asked often enough.

To answer that question we have to strip away the illusion that any model could perfectly satiate the fear of being incomplete.  Every model is incomplete.  We have no evidence to suggest that we are capable of finding and squashing all bugs (let alone all security bugs), even when an application is put under numerous spotlights.  The hierarchical model, in my opinion, exists exactly because of this fear– people like bookends.  It fulfills a desire for a concrete beginning and ending to the test, but in exchange it steals away creativity and the relationship between the tester and what the application has to say.  It’s like two people dancing next to each other, not with each other.

Instead– testing in a fashion that relies on past experience, asks questions, and listens to the application is perhaps the best way to provide a thorough test.  It allows the tester to deal with what IS going on with a site, rather than trying to fit the site into a specific mold.

To be clear, though, the nebulous definition of methodology still sucks.  I am not suggesting testing an application in a totally undirected fashion.  I am merely pointing out that actual conversation with the application has a much greater potential to drive us deeper into the heart of what is going on.

Since the aforementioned definition is indeed nebulous, the approach we will review and work with sits somewhere in between.  It is clearly less formal than a hierarchical approach, yet more formal than “I just test the app.”  It is a focused and iterative process in which each piece of the test drives us forward and continues to reveal more of the puzzle.  It is both active and passive****, as in many cases we can shortcut the guesswork through functional exploits to gain deep visibility into the application’s composition.
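That focused, iterative shape can be sketched as a worklist of leads.  Everything here is my own framing (the lead names and the `investigate`/`worth_pursuing` functions are hypothetical, not a prescribed tool or API): acting on a lead may surface new leads, and the tester’s judgment– not a fixed checklist– decides what is worth pursuing.

```python
# A minimal sketch of lead-driven, iterative testing.  `investigate` and
# `worth_pursuing` stand in for the tester's actions and judgment; they
# are hypothetical, not a real testing framework.

from collections import deque

def iterative_test(initial_leads, investigate, worth_pursuing):
    """investigate(lead) -> new leads uncovered by acting on this one.
    worth_pursuing(lead) -> the tester's judgment call, not a checklist."""
    queue = deque(initial_leads)
    seen = set()
    pursued = []
    while queue:                         # the test ends when the tester
        lead = queue.popleft()           # decides nothing left is worth it
        if lead in seen or not worth_pursuing(lead):
            continue
        seen.add(lead)
        pursued.append(lead)
        queue.extend(investigate(lead))  # each answer raises new questions
    return pursued

# Toy example: each lead reveals the next, until judgment says stop.
reveals = {"login": ["session-cookie"], "session-cookie": ["admin-panel"], "admin-panel": []}
print(iterative_test(["login"], reveals.get, lambda lead: lead != "admin-panel"))
# ['login', 'session-cookie']
```

The loop has no predefined endpoint; it ends when the queue of leads the tester judges worthwhile runs dry– which is exactly the point made below about when a test is “over.”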

Oh.  So, how do I know when a test is over?  When I say it’s over.  Being a professional, reliant on my past experience and education, puts me in a position to say that.  Relying on someone else’s checklist does not.  The rest of the series will revolve around unpacking what this all means.  This process is neither comfortable, traditional, nor yet complete.  But for me, it’s made all the difference so far.

On a personal note, I’ve tried the hierarchical approach to testing applications by studying and following a wide variety of methodologies.  In each case, I can say that inevitably I was left with a feeling of, quite frankly, boredom.  Every test becomes the same, and the job becomes monotonous.  I have embraced the approach I am outlining in this series because I’ve found that testing is a relationship.  Applications are very honest, and if you can learn to ask intelligent questions, and to listen to what they say, they will tell you a great deal about themselves.


* I also want access to the server if it’s hosted– a recreation of the environment with total visibility.
** This is not a knock at these tools or their creators– only pointing out the shotgun approach bears limited fruit, especially compared to other more informed approaches.
*** My approach here was not exactly scientific, as I did not send out a formal survey or the like.  I did, however, talk to a fair number of experienced and notable testers about this issue.
**** Active and passive testing are also very weak terms.  Lately I’ve been using the terms elicitation and interrogation.  I will get into more detail on that later.