Stephen Bishop of IDsec looks at getting value from penetration testing.

Asking a third party to carry out penetration testing - in other words, an audit of a company's network security defences - has become relatively commonplace for many businesses, not just within large corporations, but an increasing number of mid-sized organisations too.

Yet all too often, the people paying for penetration testing fail to obtain maximum value from their investment, simply because they do not have a clear understanding of what to expect.

This is not surprising when you consider that most organisations commission penetration tests just once or twice a year, so testing is not part of their day-to-day experience. However, IT managers are no longer simply taking penetration testers' recommendations on faith.

The key, therefore, is to make penetration testing a more accessible and understandable topic, rather than allowing it to be shrouded in the mystery of its early days, when network managers were more prepared to trust that the penetration test provider knew what it was doing.

The problem is that penetration testing is a very broad area of practice, with the approach taken varying considerably between different practitioners.

Not only do their methods of penetration testing vary, their styles of reporting on the results can be very different too, both in terms of the format and the information each report contains. This can make it hard for the recipient to interpret the results.

Understanding definitions

When commissioning a penetration test, one of the first steps is to make sure that everyone is using the same set of definitions.

For example, there are significant differences between penetration testing in its purest, strictest sense - where the aim is to break into the organisation by any means possible - and a security audit, which tends to be a wider assessment of risk, based on a variety of evidence.

Both methods have considerable value, but they are separate entities and should not be confused, if for no other reason than the reports they generate will be different. Failure to appreciate this can lead to the purchaser of a penetration test or audit being disappointed.

It is also important to realise that not all penetration test providers are created equal. Amazingly, there is not always a clearly reasoned link between the results obtained and the recommendations given.

Furthermore, some providers use entirely automated processes, whereas experience shows that the best penetration tests tend to be those that provide a combination of automated and manual actions, largely because human beings can interpret the potential meaning of data patterns more intuitively than a machine ever can.

Reporting best practices

The text-book approach to penetration test reports is really the only viable option. The report should contain three clearly separated elements: a summary, a description of the context and a presentation of the findings.

The summary should be self-contained, so that it can circulate as a standalone document and give everyone involved an overview. It should not, however, be over-simplified into management-speak, because this is, after all, an inherently technical topic.

The next aspect to look at is setting expectations, which is the job of the context element of the report. No penetration test can cover every aspect of security, and the boundaries between the target and supporting or third-party systems are often blurred; the context should make these limits explicit.

The context part of the report should make clear what the purpose of the penetration test is, including caveats, a short description of the system and a brief assessment of the threats.

It should explain the scope of the penetration test, including any relevant policy documents, a list of targets and testing dates, and any agreed or unavoidable restrictions. The context text should also list the kinds of tests being carried out and what they are designed to address.

Details of the tools used should be included because, in most cases, there will be several of these, not just the conventional scanning tools with which many people will be familiar. Finally, there should be information about the personnel involved. All this sets the background and makes the next step - the actual findings - easier to understand.

Presenting the evidence

Presenting the evidence can be tricky: the overall findings must be easy to understand without degenerating into list after list of impenetrable data.

So, before getting into the details of the test results, the report should describe any general findings, including a list of the systems and services found, any functional anomalies and performance issues, and anything relating to third-party systems that lie outside the client's domain but were visible during testing.

Next, the report should itemise the security issues one at a time. This may sound easy, but it is where the heart of a 'best practice' penetration test report lies, because considerable skill is needed to tease out the underlying issues from the raw results.

The practitioner should avoid reporting the same underlying flaw more than once under different names. Then, to provide real value, consequential vulnerabilities should be consolidated into one top-level issue.
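The consolidation step described above can be sketched in a few lines of code. This is an illustrative example only, not a real tool: the finding names and the `root_cause` tags are hypothetical, standing in for whatever analysis the practitioner uses to link symptoms back to one underlying flaw.

```python
from collections import defaultdict

# Hypothetical raw scanner findings; several stem from the same
# underlying flaw, here tagged with an illustrative root-cause label.
raw_findings = [
    {"name": "Apache version disclosure", "root_cause": "outdated-apache"},
    {"name": "Known CVE in mod_ssl",      "root_cause": "outdated-apache"},
    {"name": "Weak TLS ciphers accepted", "root_cause": "outdated-apache"},
    {"name": "Anonymous FTP login",       "root_cause": "ftp-misconfig"},
]

def consolidate(findings):
    """Group consequential findings under one top-level issue per root cause."""
    grouped = defaultdict(list)
    for finding in findings:
        grouped[finding["root_cause"]].append(finding["name"])
    return dict(grouped)

issues = consolidate(raw_findings)
for cause, symptoms in issues.items():
    print(f"{cause}: {len(symptoms)} related finding(s)")
```

The client then sees two top-level issues to fix, rather than four apparently unrelated findings, and each retains its supporting symptoms as evidence.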

One of the most difficult aspects to address is to determine the severity of each security issue.

After all, while the practitioner may know more about the security of the network than the in-house staff, the practitioner is unlikely to have a detailed understanding of the organisation's business, and therefore, the potential business impact of each flaw.

However, by clearly identifying the vulnerability and possible fixes, the practitioner can give the client as much help as possible to put it into context.

Each security issue should be clearly reported in a consistent manner:

  • Issue – a clear description of the vulnerability using, if possible, a recognized name.
  • Severity – an initial assessment by the practitioner in terms of 'must fix', 'should fix', or 'could fix'.
  • Evidence – a statement of the tester's reasons for believing that the vulnerability exists, with an appropriate level of detail.
  • References – pointers to relevant external information sources.
  • Actions – recommendations for resolving the issue in the current context.
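The five-part format above maps naturally onto a simple record type. The sketch below is one possible way to capture it, assuming a Python-based reporting workflow; the field names mirror the list in the article, and the example issue and its details are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SecurityIssue:
    """One reported security issue, following the five-part format above."""
    issue: str                # clear description, ideally a recognised name
    severity: str             # 'must fix', 'should fix' or 'could fix'
    evidence: str             # why the tester believes the flaw exists
    references: List[str] = field(default_factory=list)  # external sources
    actions: List[str] = field(default_factory=list)     # recommended fixes

# Invented example entry, for illustration only.
example = SecurityIssue(
    issue="SSH server permits password authentication from the internet",
    severity="must fix",
    evidence="Successful password login observed against host 192.0.2.10",
    references=["Vendor hardening guide for the SSH daemon"],
    actions=["Disable password authentication; enforce key-based login"],
)
```

Keeping every issue in the same structure makes reports consistent between tests and between testers, which in turn makes them easier to compare over time.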

Making it clear and believable

The penetration test provider should make sure that any claims are backed by very firm, clear evidence. This does not mean that the report should be pages full of network statistics, but the tester needs to show the reasoning behind each claim.

At the very least, this usually means demonstrating that the vulnerability is repeatable, so that it will show up on future tests and therefore, the effectiveness of any remedial work can be measured.
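The value of repeatability can be shown with simple set arithmetic: if each run of the same tests against the same targets yields stable finding identifiers, the difference between two runs measures remediation progress. The identifiers below are invented for illustration.

```python
# Findings from the original test and from a retest after remedial work.
# Identifiers are illustrative; in practice they would be stable names
# or reference numbers assigned by the tester.
baseline_run = {"weak-tls-ciphers", "anonymous-ftp", "default-admin-password"}
retest_run = {"weak-tls-ciphers"}

fixed = baseline_run - retest_run        # no longer reproducible
outstanding = baseline_run & retest_run  # still reproduce on retest
regressions = retest_run - baseline_run  # new since the baseline

print(f"fixed: {sorted(fixed)}")
print(f"outstanding: {sorted(outstanding)}")
print(f"regressions: {sorted(regressions)}")
```

Without repeatable findings, none of these three sets is meaningful, and the effectiveness of the remedial work cannot be demonstrated.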

It is also important to bear in mind that the report may go to a far wider audience than the person who commissioned the test. Many of these people, such as the general IT department, may be hostile to the report.

It is in the interests of both the person who commissioned the test and the company that carried it out to make sure that the case is easily understandable, with arguments and conclusions clearly explained and backed up.

Hopefully, this provides some guidance for ensuring that the penetration test is performed and reported to a high level. One final suggestion: ask prospective suppliers of penetration tests for a few sample reports from previous projects.

A report that fails to demonstrate clear thinking raises the question of whether the supplier is up to the job.

In a nutshell

  • IT managers are no longer just taking what penetration testers recommend on faith.
  • When commissioning a penetration test, one of the first steps is to make sure that everyone is using the same set of definitions.
  • A report should have three clearly separated elements: a summary, a description of the context and a presentation of the findings.
  • To provide real value, consequential vulnerabilities should be consolidated into one top-level issue.
  • The tester needs to show the reasoning behind each claim.