Security Testing Approaches

It seems to me there is a whole bunch of hubbub regarding penetration testing approaches.  The very name has, regretfully, become so saturated that it doesn’t surprise me that everyone wants to argue about what it means.  The greater concern, at least for me, is that we have gone down the path of arguing for a singular approach instead of accepting that there are a variety of valid ones.  In the context of testing people, there are a variety of valid measures to determine what they remember, understand, or can do.  So it seems odd to me that instead of accepting that to be equally true for testing security, we try to come up with one standard to rule them all.  The truly disappointing part is when the standard itself acts as a point of exclusion, with the statement that if you aren’t following it, you are doing it wrong.  I call bullshit.

In the world of education, there are a variety of ways to test a variety of things.  Methodologies such as norm-referenced, criterion-referenced, survey, summative, and formative testing each exist to gather a different, specific set of information that can be used to evaluate a person or group of people.  These testing approaches each create a different means of relative validity; that is, what might be a good way to ask a question in one test is completely invalid in another.  The more closely a test aligns with what it intends to measure, the more that speaks to its validity.  It’s that simple (except for all the hard parts).

In the context of testing a computer system, my experience has been that there are at least three different (and equally valid) approaches to testing.  I categorize these as intrusion, assessment, and compliance.

Intrusion testing is what most people in the offensive security world seem to mean when they talk of a pen-test.  The goal in this approach is not so much the security of a singular item, but the security of the overall system.  In a test like this, I would focus on achieving a specific goal (say, data exfiltration) by whatever means are easiest.  Reducing the scope of allowed targets works against the goal of the test itself; ideally, I would argue, everything a real attacker might be able to access should be in scope.  I believe this type of testing to be very valuable in understanding awareness, visibility, and operational response to threats.

Assessment-based testing is more like what one might naturally think of as a QA test.  I personally prefer this approach overall, as it feels more natural to me as an ex-developer, and I believe it to be more thorough.  The goal of this test is to understand deeper degrees of threat to a specific entity.  Scope is sometimes best served by reducing it to specific targets, so that the test itself can be more focused.  Transparency in a test like this is critical: the more detail the tester is given, the better he or she can understand what’s wrong with the system and build a more detailed picture of the potential threats against it.  This is effectively lightweight security research.

The final approach, and one which many InfoSec folk seem to hate, is compliance-based testing.  The short version is that you take a set of standards and test the system against those standards.  Lots of people call this a check-list approach to testing, and while there are many concerns, the test itself is only as valid or invalid as the standard it measures against.  Many people think of this as PCI or PA-DSS, or various other bits of compliance, but it also has relationships to SDLC, OpenSAMM, and other such standards.  If the standard itself is thorough, then the test must also be thorough.  This can also be used to check compliance against an internal standard; it doesn’t have to come from an external entity.
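To make the check-list idea concrete, here is a minimal sketch of what compliance-based testing can look like in code.  The requirements and the system configuration below are entirely hypothetical examples, not drawn from any real standard; a real engagement would map each check to an actual control in PCI DSS, an internal standard, or whatever baseline applies.

```python
# Minimal sketch of checklist-style compliance testing.
# All requirement names, check functions, and config keys here are
# hypothetical illustrations, not taken from any real standard.

def check_min_password_length(config):
    return config.get("min_password_length", 0) >= 12

def check_tls_only(config):
    return config.get("allow_plaintext_http", True) is False

def check_log_retention(config):
    return config.get("log_retention_days", 0) >= 90

# The "standard": a mapping of requirements to check functions.
CHECKLIST = {
    "passwords require 12+ characters": check_min_password_length,
    "plaintext HTTP disabled": check_tls_only,
    "logs retained for 90+ days": check_log_retention,
}

def run_compliance_test(config):
    """Return a {requirement: passed} report for the given system config."""
    return {name: check(config) for name, check in CHECKLIST.items()}

if __name__ == "__main__":
    system = {"min_password_length": 8,
              "allow_plaintext_http": False,
              "log_retention_days": 365}
    for requirement, passed in run_compliance_test(system).items():
        print(f"{'PASS' if passed else 'FAIL'}: {requirement}")
```

The point of the sketch is the structural one made above: the test is exactly as good as the checklist.  Swap in a thorough standard and the test is thorough; swap in a shallow one and no amount of tooling saves it.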

There might be more, but these three categories at least help me focus my efforts.  I determine what to align my efforts toward based on what is set in the statement of work.  I think each of these tests has a purpose, and all could be placed under the label of penetration test, since in all cases you are technically “penetrating” something.  It might be easier if we just dropped the phrase “penetration test” and simply called them security tests, at least for the sake of clarity.  That won’t likely happen, but here’s hoping.

Finally, just because I don’t test your preferred way doesn’t mean I am wrong.  I have my reasons, as do you, and I would hope that you can respect mine as I respect yours.  As long as your results are accurate, in line with customer expectations, and actionable, I think we can all agree that’s a step in the right direction.

Just some thoughts.




  • Beckie Mossman  On March 6, 2011 at 9:44 pm

    I agree that not everyone should follow the same approach; that becomes formulaic and defies the intention of the testing.

    Where I get concerned is the cases where the bar on “pen-tests” is so low that the test adds no value but still meets most assessors’ requirements.  Almost weekly, I perform assessments in which a bare scan has been written up and labeled as a pen-test.  I would argue that we should have a minimum standard, except in some cases that’s all that will ever get done.

    The security-test concept is great; it would help to break out the components needed in each engagement.

    • pinvoke  On March 6, 2011 at 10:25 pm

      Minimum standards seem like a good idea, but they’re hard in practice.  For instance, I might have a specific type of assessment that needs to be accomplished and not want to run a full range of tests.  To say that the test is invalid because it’s not “complete” under some arbitrary set of rules isn’t very accurate either.

      Not everyone would agree on what a minimum acceptable security test should accomplish, either.  As demonstrated by PCI: the moment we say this is a good baseline, everyone gets their pants in a bunch over it.  Accuracy of what is actually tested (vs. what is stated as tested) is the first part of the problem.  Validity of the test and comprehensiveness of the test are two different types of issues, and each should be viewed in the context of the type of security test itself.

      • pinvoke  On March 6, 2011 at 10:29 pm

        As another point, consider the SDLC from MS.  While I can prove over a period of time that particular measures reduce the overall number of security bugs in a code body, I CANNOT conclude that my application is secure.  I can only conclude that, based on the measures I set out with, I have reduced the bug count over a period of time.  That’s it.

        If you wanted to know whether it was sufficiently hardened, an assessment might be a good test to balance your assumptions and measures against reality.  Each test is valid within its context; each is appropriate for what it intends to test.

