The most important quality in cybersecurity is sincerity, even if you don’t really mean it. Declan O’Riordan MBCS looks at unintentional errors and how dishonest criminals can use them for plausible deniability.

Auditors and executives often like to see a simplistic tick-box approach to security, and so do hackers. Checklists can be very useful, but they are also context-free and ignore many bespoke aspects of our IT systems. The reality gap between tick-boxes and unique code configurations presents opportunities for deliberate dishonesty and unintentional mistakes to go unnoticed.

Many enterprises engage a healthy number of staff working on security policy, governance and compliance, while only a small minority look at the details of the applications connecting the organisation to its customers. Policies, especially those backed by civil and criminal law, play an important part in the equation employees use to balance their desired honesty against their potential for selfish dishonesty:

  1. How much do I stand to gain?
  2. What is the probability of being caught?
  3. What is the magnitude of punishment if caught?

Without contractual obligations and clear penalties for rule-breaking, many mostly honest staff might be tempted to engage in corrosively dishonest behaviour that becomes infectious when other colleagues are seen to be ‘getting away with it’.

But what if an employee can manipulate one or more of the equation factors? Let’s consider the malicious insider who creates applications that process valuable data. Here the return could be almost unlimited, although many of the 18 million developers in the world may be tempted by less than £10,000.

The malicious insider might covertly write code for a data input field that contains a backdoor granting administrator privileges on the live system, or use timing channels and steganography to hide data exfiltration. Their options include turning data into code, abusing code formatting, abusing reflection, abusing inversion of control, abusing validation, preventing data flow analysis, abusing callbacks and ‘Trojaning’ the Java platform, container, libraries and build server, to name but a few.
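To make the first of those concrete, here is a minimal, hypothetical Java sketch of a backdoored input handler: the validation looks routine, but one ‘magic’ value quietly elevates privileges. The field name, value and session interface are all invented for illustration.

    import java.util.Map;

    // Hypothetical sketch of a backdoored input handler. The field name,
    // magic value and Session interface are invented for illustration.
    public class DiscountHandler {

        public void handle(Map<String, String> formFields, Session session) {
            String code = formFields.get("discountCode");

            // Looks like ordinary validation to a reviewer skimming the change...
            if (code != null && code.matches("[A-Z0-9]{8}")) {
                session.applyDiscount(code);

                // ...but one specific value silently grants administrator rights.
                if ("ZX81QL99".equals(code)) {
                    session.setRole("ADMIN");
                }
            }
        }

        // Stand-in for the application's real session object.
        public interface Session {
            void applyDiscount(String code);
            void setRole(String role);
        }
    }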

Malicious code is not the same as the vulnerabilities introduced by developers who weren’t trained in secure coding: it is sabotage that causes harm directly.

The risk of discovery for the malicious insider lies mainly in code inspection and scanning by tools. A reviewer’s eye may be drawn to unusual coding, but inspecting the billion lines of code a financial organisation may own is harder than creating a new season of Breaking Bad, especially if the attack is obfuscated.

Generic static application security testing (SAST) and dynamic application security testing (DAST) tools are faster than human reviewers, yet have high false positive and false negative rates because they have no domain-specific context with which to understand the business logic.

While these tools are not fooled by semantic obfuscation, they are limited in understanding what the code is supposed to do, particularly if the business rules are violated. If the malicious insider knows which code analysis tools the organisation is using, they can also run the tools themselves to make sure their flaws are not detected when deployed.
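As a purely illustrative sketch of how an insider might hinder data flow analysis, the fragment below routes user input to a SQL sink through a field and a reflective call; whether a particular scanner follows that path is entirely tool-dependent, and all names here are invented.

    import java.lang.reflect.Method;
    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Illustrative only: tainted input is parked in a field and reaches the
    // SQL sink via reflection, an indirection some taint-tracking scanners
    // may not follow. All names are invented for this sketch.
    public class AuditFilter {

        private String pendingClause;

        public void setFilter(String userSupplied) {
            this.pendingClause = userSupplied; // attacker-controlled text
        }

        public void flush(Connection connection) throws Exception {
            // The sink is invoked reflectively, obscuring the direct
            // source-to-sink path from simple call-graph analysis.
            Method run = AuditFilter.class.getDeclaredMethod(
                    "runQuery", Connection.class, String.class);
            run.invoke(this, connection,
                    "SELECT * FROM audit WHERE " + pendingClause);
        }

        private void runQuery(Connection connection, String sql) throws SQLException {
            Statement statement = connection.createStatement();
            statement.execute(sql); // injectable query
        }
    }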

Impersonating unwitting insiders

Nevertheless, a cautious criminal might be deterred by the risk that analysis will eventually expose their code as deliberately malicious. Now the sophisticated insider reduces the probability of punishment by going to the next level: impersonating the vast numbers of unwitting insiders who make regular security mistakes as they build applications.

So long as the IT paradigm allows designers, developers and testers to churn out accidentally vulnerable applications, there is plenty of scope for criminals to shroud themselves in plausible deniability. This means the best attacks are the ones that look like the typical vulnerabilities found in code all the time.
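By contrast, the most deniable attack may need no cleverness at all. The hypothetical fragment below is indistinguishable from the everyday mistake of building SQL by string concatenation, yet it leaves the query open to injection; class, table and column names are invented for illustration.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    // A 'plausibly deniable' flaw: careless mistake or deliberate backdoor?
    // The code is indistinguishable from routine insecure coding.
    public class ReportDao {

        private final Connection connection;

        public ReportDao(Connection connection) {
            this.connection = connection;
        }

        public ResultSet findByOwner(String owner) throws SQLException {
            // String concatenation instead of a PreparedStatement leaves the
            // query injectable; intent is impossible to prove either way.
            String sql = "SELECT id, title FROM reports WHERE owner = '" + owner + "'";
            Statement statement = connection.createStatement();
            return statement.executeQuery(sql);
        }
    }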

How many of us would be prepared to admit our knowledge of application security is weak during a job interview? How many interviewers know how to ask probing security questions? A small act of dishonesty, rationalised by the intention to learn security on the job later, can have far-reaching consequences.

Oh dear, now we have two difficult problems: malicious insiders avoiding detection among the mass of staff making unwitting mistakes, and the mass of staff making unwitting mistakes itself. To make the former more visible, we can try to deal with the latter by raising the tide of quality for all application builders.

Go beyond the top ten

There is huge scope for improvement in our QA processes. For example, the OWASP top ten web application vulnerabilities list is a small, incongruous threat library that is treated almost universally as a dogmatic threat model. Quite unwittingly, many project development teams ignore realistic threats and become an insider risk through their own actions and inactions.

Systems are designed and developed without appropriate security controls because threat modelling is insufficiently considered. It follows that security testing is unlikely to validate the effectiveness of the controls that were really needed, because they were never identified in the first place.

The ‘Top Tens’ have been highly effective in making application security more understandable and OWASP is a superb resource, but the scope of security testing cannot be context-free. It’s time to move on from the top-ten mantra and consider treating the other threats to our users, tiered architectures, compiled applications, web servers, shared hosting, and application logic. Unfortunately, that is easier to say than do.

One great advance in the last year has been the arrival of interactive application security testing (IAST) for development and test environments, plus runtime application self-protection (RASP) tuned for production systems, to use the Gartner terminology. Setting aside the IAST tools that simply correlate the results of SAST and DAST scanners, the real progress in ‘true positive’ vulnerability and attack detection has been achieved by placing real-time security sensors within the applications themselves.

This approach is similar to the new generation of application performance monitoring (APM) tools, in that it uses the instrumentation API provided within platforms such as the Java Development Kit (JDK) from version 1.5 onwards.
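As a rough sketch of the mechanism (not any vendor’s implementation), a Java agent registered through java.lang.instrument can see every class as it is loaded and rewrite its bytecode to embed security sensors. The class and package names below are invented; a real IAST/RASP agent would transform the bytecode rather than merely log it.

    import java.lang.instrument.ClassFileTransformer;
    import java.lang.instrument.Instrumentation;
    import java.security.ProtectionDomain;

    // Minimal sketch of the instrumentation hook IAST/RASP tools build on.
    // Packaged in a jar with 'Premain-Class: SecuritySensorAgent' in its
    // manifest and launched with: java -javaagent:sensor-agent.jar -jar app.jar
    public class SecuritySensorAgent {

        public static void premain(String agentArgs, Instrumentation inst) {
            inst.addTransformer(new ClassFileTransformer() {
                @Override
                public byte[] transform(ClassLoader loader, String className,
                                        Class<?> classBeingRedefined,
                                        ProtectionDomain protectionDomain,
                                        byte[] classfileBuffer) {
                    // A real agent would rewrite the bytecode here (for example
                    // with a library such as ASM) to wrap sensitive sinks like
                    // SQL execution or file access with sensors. This sketch
                    // only notes which application classes are being loaded.
                    if (className != null && className.startsWith("com/example/")) {
                        System.out.println("[sensor] loading " + className);
                    }
                    return null; // null means 'leave the class unchanged'
                }
            });
        }
    }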

Currently the IAST and RASP tools only work in application servers using Java, .NET, Node.js and ColdFusion, but the pace of progress suggests that we can expect broader scope in the near future. The technology is not quite a silver bullet, although given a 92 per cent accuracy rating in OWASP’s 2015 Benchmark of 21,000 security test cases, we might consider it a decent bronze bullet to be supported with targeted alternative techniques that fill the gaps.

At last we have continuous (as opposed to continual) security tools that find and report vulnerabilities and attacks in real time, and provide a dashboard that can be integrated with anomaly detection systems in the security operations centre (SOC).

‘Why does that developer keep deploying vulnerable code even though the IAST tool is instantly pointing out their mistakes? Do they need more training, or are they deliberately creating vulnerabilities?’ Getting to the answer may require asking tough questions, but thankfully we are starting to get security vulnerability and attack detection tools that work at DevOps speed across entire enterprise portfolios, helping us to distinguish error-prone innocents who need help from malicious insiders who need prosecuting.

Since we consider dishonest developers who deliberately place attacks in code to be an insider risk, why not also treat unprepared project teams who inadvertently build vulnerable systems as an insider risk?

Mitigating, transferring, eliminating and accepting risks may all be valid treatments, but ignoring the risk arising directly and indirectly from the unwitting insider is not. The struggle to improve security is not easy, but nor is it hopeless, and your reading this far plays a part in gaining momentum for better QA through improved understanding of the tools and techniques available on both sides.