It is estimated that security incidents resulting from software application vulnerabilities cost around $180 billion a year. It’s clearly expensive to retrofit security after software or an application is delivered - in fact, IBM estimates that it is 30 to 100 times more expensive to retrofit functionality than to build it in during the development lifecycle.
Even though software continues to be deployed with flaws, businesses are starting to demand more assurance about the quality of the software they are buying and using. As a result, new practices are being implemented, and there has been a real focus by the software community on reducing software coding errors.
While this is a hugely positive development, not all of the breaches we see are caused by insecure code, so these efforts alone will not be enough. Buffer overflows and breaches caused by bad input may well result from insecure coding, but quite often breaches are caused by bad design, inadequate requirements definition, or poor operation and maintenance processes. Security is not yet well enough understood by the development community.
Specifying security throughout the software development lifecycle
In any development environment, be it software, building a house, making a car or anything else you might like to think of, there is always a trade-off between functionality, timescale and cost.
You can’t really change one of these without affecting one or more of the others. If you want more functionality, it will probably cost more and take longer. If you want to reduce the cost, you can’t usually do so without reducing the functionality. IT projects’ track record of over-running and over-spending only exacerbates this problem.
Furthermore, it is generally the case that security requirements are not specified, so it’s not surprising that they are not delivered. Even when they are specified, security requirements and functionality are soft targets for dropping when projects go through re-scoping.
It’s not just about requirements, though. Security requirements and functionality are usually not tested at all. Testing usually concentrates on whether the software or application delivers what it should when it is operated correctly; little time is spent testing situations where the system is not operated as it should be.
Probably the best example of poor design causing problems is the way TCP makes a connection. A TCP connection is very much like a telephone call: we have to follow a protocol. You make a phone call, the person at the other end answers, and you respond. TCP is exactly like that. To establish a connection, you have to send a SYN packet.
The server should then send you a SYN-ACK packet, and you send an ACK packet back, and the connection is then open to receive traffic. In the well-known denial of service attack that was first used around 1996, the attacker sent a number of SYN packets but never acknowledged the SYN-ACK that was returned.
The result, rather as if you made a phone call and said nothing when the person answered, is that the person you have called hangs on the line for a while before hanging up. This is exactly what happens in a denial of service attack.
The attacker floods the server with SYN packets, leaving the server with so many half-open connections that genuine users cannot use the service. This is not a coding problem. It is a design problem.
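To make the handshake concrete, here is a minimal sketch using Python’s standard socket module (the loopback address and port are arbitrary choices for illustration). The client’s connect() call performs the whole SYN / SYN-ACK / ACK exchange, and the server’s listen() backlog is the kernel’s queue of connections awaiting acceptance - the queue of pending connections that a SYN flood sets out to exhaust (exact queueing behaviour varies by operating system).

```python
import socket

# Server side: listen() sets the backlog - the kernel's queue of
# connections that are still being set up or waiting to be accepted.
# A SYN flood fills this queue with half-open connections.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8080))   # illustrative local address and port
server.listen(128)                 # backlog of pending connections

# Client side: connect() performs the three-way handshake for us -
# the OS sends SYN, waits for the SYN-ACK, and replies with ACK.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 8080))

conn, addr = server.accept()       # returns once a handshake has completed
client.close()
conn.close()
server.close()
```

Mitigations such as SYN cookies exist precisely because this weakness sits in the protocol’s design; no amount of careful coding in the application removes it.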
What should we do about it?
Since software, like anything else that goes through a manufacturing process, is designed and developed to a blueprint, it is of paramount importance that security requirements are determined alongside the functional and business requirements.
A preliminary risk assessment at this stage serves to determine the core security necessities of the software, while a security plan should be generated as part of the design phase of the project to be revisited and adjusted as the development progresses. In fact, we have to consider security at every point in the system’s development life cycle:
1. Requirements
At the requirements stage, we should be considering what the security requirements are. Are there any specific business security requirements that need to be built in? Is the data being processed particularly sensitive? Are there any specific requirements that have to be met? For example, is the system processing credit card information? In that case, the requirements of the Payment Card Industry Data Security Standard (PCI DSS) have to be incorporated.
2. Design
From an architectural design viewpoint, consideration must be given to how the security requirements can be designed into the system. Design should also consider how the system might be misused or what could go wrong - a perspective often missed by developers, who instinctively think about how to build things rather than how to break them. We should also think about the main threats to the design of the system; SQL injection is a typical example, as sketched below.
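To illustrate why SQL injection is a design matter rather than a line-by-line coding slip, here is a minimal sketch using Python’s built-in sqlite3 module with a hypothetical users table. The first query builds SQL by concatenating untrusted input and is injectable; the parameterised version separates code from data, a decision taken at design time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")        # illustrative in-memory database
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"           # a classic injection payload

# Vulnerable design: user input is concatenated into the query text,
# so the payload rewrites the query and every row comes back.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(rows)                                # leaks all rows

# Safer design: a parameterised query keeps data and code separate;
# the payload is treated as an ordinary string and matches nothing.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)                                # []
```

Whether parameterised queries, stored procedures or an ORM are used, the point is the same: the mitigation is chosen when the data access layer is designed, not patched in afterwards.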
3. Coding
Coding is important, and we must make sure the coding is robust and secure. There are many tools that can be used to check code, and code inspection can also be used in the testing phase. The approach to coding should be to trust nothing and to be able to process anything (a minimal sketch follows this list):
- don’t rely on the length of the input;
- don’t rely on the content of the input;
- obtain input a character at a time rather than a buffer at a time;
- validate the input as well as parsing it.
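Here is a minimal Python sketch of these rules; the field name, length limit and allowed character set are illustrative assumptions, not prescriptions.

```python
import io

MAX_NAME_LEN = 64                      # illustrative limit for this field

def read_username(stream) -> str:
    """Read and validate a username, trusting neither length nor content."""
    chars = []
    # Read one character at a time rather than a whole buffer, so the
    # input's length is bounded by our own limit, not the sender's.
    while True:
        ch = stream.read(1)
        if ch == "" or ch == "\n":     # end of input
            break
        if len(chars) >= MAX_NAME_LEN:
            raise ValueError("input exceeds maximum length")
        # Validate content against an allow-list, not a block-list.
        if not (ch.isalnum() or ch in "-_"):
            raise ValueError(f"illegal character: {ch!r}")
        chars.append(ch)
    if not chars:
        raise ValueError("empty input")
    return "".join(chars)

# Usage with an untrusted source; a string buffer stands in for a socket.
print(read_username(io.StringIO("alice_01\n")))   # -> alice_01
```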
4. Testing and deployment
Testing should check the security functionality along with the other functionality, ensuring that the system is resilient to attack. And of course testing should look for incorrect system operations, as well as correct ones. Secure deployment ensures that the software is functionally operational and secure at the same time. It means that software is deployed with defence-in-depth and that the attack surface area is not increased by improper release, change, or configuration management.
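As a sketch of what such negative, or abuse-case, testing can look like, the following pytest example exercises a hypothetical parse_quantity validator with both correct input and input the system was never meant to receive.

```python
import pytest

def parse_quantity(raw: str) -> int:
    """Hypothetical order-quantity parser with its rules built in."""
    value = int(raw)                   # raises ValueError on non-numeric input
    if not 1 <= value <= 100:
        raise ValueError("quantity out of range")
    return value

def test_correct_operation():
    # Conventional functional test: the happy path works.
    assert parse_quantity("5") == 5

@pytest.mark.parametrize("bad", ["", "abc", "-1", "0", "101", "5; DROP TABLE"])
def test_incorrect_operation(bad):
    # Abuse-case tests: the system must reject input it should never
    # receive, not merely handle the input it expects.
    with pytest.raises(ValueError):
        parse_quantity(bad)
```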
Software that works without any issues in development and test environments often experiences problems when deployed into a more hardened production environment. Post-mortem analyses in the majority of these cases reveal that the development and test environments do not simulate the production environment. Changes made to the production environment should therefore be retrofitted to the development and test environments through proper change management processes.
5. Operations and maintenance
It doesn’t matter how well the system has been designed, coded and tested; if it is operated insecurely, then all of that effort has been wasted. Maintenance presents two main problems. Firstly, any changes should be designed, coded and tested with the same rigour as the initial implementation.
Secondly, it is important to have good change management and source management systems. Too many upgrades are released with errors that were corrected in previous versions resurfacing in the new version. This happens because the corrections were not retrofitted back into the development environment, or because flawed source code, rather than the corrected version, was deployed within a broader program.
6. Disposal
Don’t forget the disposal of the system. We have seen reputations damaged where data has been left on hard drives when systems have been replaced or updated. Make sure that what is left behind is considered from both a software and a hardware viewpoint.
Sharing best practice on secure software development
There are a lot of software development languages and different development methodologies. I am sometimes asked whether we should develop a set of rules for one methodology and then adapt them for another, or a set of coding standards for Java, another set for C++ and yet another set for .NET.
While this might be an approach at the code-specific level, there is a much broader need to recognise the principles of secure application and software development that apply across different development methodologies and different programming languages. This is no different from what we do with IT policies and standards. The top levels are usually control objectives or principles that are implemented on different platforms or situations.
The good news is that we don’t have to go out and develop these principles. At (ISC)2, we are working with people already involved in secure development to share experiences and develop a common body of knowledge that will provide the development community with guiding principles on how to develop software securely. It is up to IT security teams to ensure these principles become better known to all the stakeholders involved in development.
Developing a security mindset
Although it is a very important and critical step in the software development lifecycle, secure code writing is only one of the various steps necessary to ensure security in software. It is time that CIOs as well as software vendors recognised that security is a process that needs to be woven into every stage of software development.