On 12 May 2017, at 07:44 UTC, a computer in Asia became the first recorded victim of a new piece of malware spreading across open network connections. Within 24 hours the code had infected more than 230,000 computers in 150 countries. The malware was WannaCry: a ransomware worm that exploited a flaw in the implementation of a common network protocol used for sharing files and printers.
The industry-wide vulnerability registry, whose entries are scored using CVSS, an open standard for assessing the severity of security vulnerabilities, recorded 14,728 vulnerabilities in 2017. In the first half of 2018 a little under seven thousand entries were registered, exceeding the previous year's count for the same period. As more software is deployed and more devices become connected, more vulnerabilities are created and reported.
Over the last 30 years, computing infrastructure and applications have become increasingly connected. With the current megatrends of the internet of things and autonomous systems, and the continuing growth of the internet, securing your infrastructure from flaws in your software is a top priority.
The Open Web Application Security Project (OWASP) was founded in 2001 and produces freely available security information for web applications. The number one item on its list of Top 10 security threats is injection attacks.
An injection attack occurs when an attacker crafts a carefully written string and passes it to your website. This string is designed either to reveal confidential information or to open up access to your systems. These threats are commonly the result of minimal or non-existent checking of user input within the program.
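As a minimal sketch of how this works in practice, consider SQL injection, the classic form of the attack. The example below uses Python's built-in sqlite3 module; the table, column and function names are illustrative, not taken from any real application.

```python
import sqlite3

# An illustrative in-memory database with two users.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

def find_user_vulnerable(name):
    # DANGEROUS: user input is concatenated directly into the SQL string,
    # so the input can change the meaning of the query itself.
    query = "SELECT name FROM users WHERE name = '" + name + "'"
    return db.execute(query).fetchall()

def find_user_safe(name):
    # SAFER: a parameterised query treats the input as data, never as SQL.
    return db.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

# A crafted string turns the vulnerable query into 'return every user'.
payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # leaks all rows
print(find_user_safe(payload))        # returns nothing: no user has that name
```

The same carefully written string that leaks every row from the vulnerable function is harmless to the parameterised one, which is why parameterised queries are the standard defence against this class of attack.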
Steve McConnell, the much-cited author of Code Complete, puts the industry average at 15-50 errors per thousand lines of code (kloc). Recent estimates suggest that Google has two billion lines of code under its control, with a reported rate of five errors per kloc.
Coverity, a company specialising in static analysis of code, estimates that to get this low rate, each line costs Google $5. To put this in perspective, Microsoft applications have a reported 10-20 defects per kloc during testing and 0.5 errors per kloc in released products.
These numbers are estimates, as there is very little official literature in this area with confirmed figures. Even with error rates as low as these, the size of the code bases that large multinationals generate means that this is a serious concern. Typically, a security defect in Android or Microsoft Word is not likely to put lives in danger; NASA, however, achieves a rate of 0.004 errors per kloc, which is reported to cost $850 per line of code.
From these figures we can conclude that good quality code costs money and there is a trade-off to be made between security and cost.
What do we mean by secure software development?
Let us step back and consider what developers are trying to achieve when we discuss secure programming. There are three primary areas of concern. The first is confidentiality: the property that data and applications are accessed only by authorised parties. Common attacks that break confidentiality include injection attacks and cross-site scripting.
Integrity is the second property, defined as ensuring that data is modified or deleted only by authorised parties. If an attacker can gain access to your systems, then the integrity of the system may be compromised.
The impact an attacker can cause should be limited by restricting their ability to escalate their access to that of a privileged user. Broken authentication systems are a common vulnerability exploited by attacks of this nature.
Finally, availability is the last key area of concern: the timely provision of service. A denial of service attack is a common threat resulting in this type of security issue. It can be caused either externally, by huge increases in network traffic, or internally, through malicious injection attacks.
No programming language is inherently secure, but many now come with features that limit the likelihood of common forms of attack. C# has the concept of managed code, limiting the potential for buffer overflows. Ada, often used in military settings, adds compile-time checks and constrains what the programmer can do. Most languages, however, leave it to the programmer to follow best practice and use the latest versions of libraries.
Programming in a secure manner is an incredibly difficult and challenging area, and one which is a constantly moving target for developers. There is an asymmetry between the programmer and any future attacker, in that the attacker only needs to find one defect to exploit whereas developers need to defend against all known and future attacks.
Is Open Source more secure?
Software can be divided into two philosophies: open source software (OSS) and proprietary software. The former allows anyone to review the source code for the application, while the latter treats the source code as a trade secret.
One of the core benefits of open source software that is often cited is that it is more secure, as the code is open for anyone to review and fix flaws in the software. Underlying this statement is the assumption that there are people who actually review the code. This may be true for the larger, more popular applications. However, if your product contains code from smaller niche libraries, it is unlikely that this would have had the same level of scrutiny.
For proprietary software, the source code is hidden away as the intellectual property of a company. This is also referred to as ‘security through obscurity’. It would seem to follow that if fewer people can see the source code, there is less chance of an attacker finding an exploitable flaw.
However, this is not the case, as attackers can use reverse-engineering to decompile the binaries and specialist tools to look for likely defects. This is an ongoing arms race between vendor and attacker.
On closer examination, the situation can become blurred. A number of historic flaws in proprietary software were found due to defects identified in open source software. This would suggest that the security of both philosophies is intertwined. Most software uses similar patterns of design and common frameworks, therefore both parties have an interest in identifying flaws and working together to resolve defects.
There has been little academic research in this area, as quantifiable data is limited; it is, therefore, unclear whether either approach is measurably more secure than the other. Neither is inherently more insecure, as long as potential threats are dealt with and users are alerted to update their installations.
How can you make your software safer for your clients?
As in many areas of cybersecurity, security comes from applying best practice at each stage. As a vendor, you should ensure your software ships with a secure configuration. Clients should not be required to make configuration changes to make your product secure out of the box.
CERT, the security division based at Carnegie Mellon University, offers recommendations for software developers to follow. Its primary recommendation is to filter any inputs coming into your system. White listing acceptable character inputs is better than black listing, as it protects against future, unforeseen exploits.
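A white list can be as simple as a regular expression that accepts only the characters you expect and rejects everything else. A minimal sketch in Python follows; the username policy shown (3-16 lowercase letters, digits or underscores) is an assumed example, not a CERT-mandated format.

```python
import re

# Accept only the characters we explicitly expect (an assumed policy:
# 3-16 lowercase letters, digits or underscores); reject everything else.
# New attack characters are blocked by default, which is the point of
# white listing over black listing.
USERNAME_ALLOWED = re.compile(r"[a-z0-9_]{3,16}")

def is_valid_username(value: str) -> bool:
    # fullmatch ensures the whole string matches, not just a prefix.
    return USERNAME_ALLOWED.fullmatch(value) is not None

print(is_valid_username("alice_99"))      # True: every character is on the list
print(is_valid_username("x"))             # False: too short
print(is_valid_username("bob'; DROP--"))  # False: quote, space and dash rejected
```

A black list would instead try to enumerate dangerous characters such as quotes and semicolons, and would need updating every time a new exploit technique appeared; the white list never does.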
Developers should limit the amount of information returned in error messages, which could otherwise give away the software versions in use or hint at patterns in user data. Many applications use third-party libraries, and these need to be kept up to date as part of any distribution.
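One common pattern for limiting what error messages reveal is to log the detail privately on the server while returning only a generic message and an opaque reference to the client. A hedged sketch follows; the function names, message wording and eight-character reference are illustrative choices, not a prescribed standard.

```python
import logging
import uuid

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

def process(payload):
    # Illustrative application logic that fails with a revealing message.
    raise ValueError("database driver 9.2.1 rejected query")

def handle_request(payload):
    try:
        return process(payload)
    except Exception:
        # Log the full detail (exception, stack trace) privately, tagged
        # with a short reference the support team can search for...
        ref = uuid.uuid4().hex[:8]
        log.exception("request %s failed", ref)
        # ...but return only a generic message to the client, revealing
        # nothing about library versions or internal data.
        return {"error": "An internal error occurred.", "ref": ref}

resp = handle_request({"query": "example"})
print(resp["error"])  # the client never sees the driver version
```

The client can quote the reference when reporting a problem, letting support staff find the detailed log entry without the application ever exposing internals.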
There are now many tools available, for a range of programming languages, that can help identify insecure development patterns. These include static analysers, which examine the source code to detect buffer-overflow issues and flag known insecure functions and libraries. In addition, dynamic analysers watch your code as it runs and identify potential security concerns, such as passwords not being erased from memory, or memory leaks.
As a system administrator, check that you are following the vendor’s best practice. Where possible it is recommended that you deny all external connectivity and only allow what is specifically required for the application to work. Keep your systems patched and respond quickly to any notifications sent by software vendors alerting you to vulnerabilities.
If your company wants to go further, you can recommend penetration testing: employing white-hat hackers who, for a given fee, will try to gain unauthorised access to your infrastructure.
If you have older software, it is recommended that you partition it away from the wider infrastructure; that way, should a breach happen, any intruder or infection will be contained.
Be brave, but never take chances
Security will undoubtedly remain a high-profile issue for developers for the foreseeable future. However, through education, better development tools and improved deployment practices the likelihood of a successful attack can be minimised.
Dr Tom McCallum is the Academic Lead Developer at the University of Highlands and Islands, located at Moray College. Tom has 11 years’ experience in both FinTech and corporate IT environments developing and maintaining a range of software and IT infrastructure.