To protect the most vulnerable, security education and awareness campaigns need also to target leaders, managers, architects, operators, administrators and developers. Oscar O'Connor FBCS presents his views and recommendations.
‘But let’s take a step back from the culture of blaming the much-maligned end user and take a look in the mirror.’
‘The industry can do better, by learning the lessons of the past and bringing security and privacy by design into the core of the systems lifecycle, from concept to retirement.’
When a bridge collapses, does anyone blame the drivers who used the bridge? Are the occupants of a building held responsible should it fall down? Vehicles have to meet rigorous standards of safety and security throughout the manufacturing process in order to be sold. Crash tests are an accepted cost of doing business. Seat belts, air bags and crumple zones, once optional, are now the norm. Pharmaceuticals cannot be sold without extensive safety trials to ensure that side effects are known, manageable and non-lethal, and that the product has the desired positive effects on the patient.
Information systems, like economic and most non-extreme political theories, have no legal barriers to overcome before they can be marketed, no requirement to meet any safety or security standards. It has yet to be tested in law whether manufacturers and vendors can be forced to accept direct financial responsibility for adverse impacts on their customers when their solutions fail and either cause or facilitate security breaches. Time will tell whether contractual clauses denying liability will withstand that test. Whatever happens, recent stories of supply chain breaches causing issues for customers can hardly be blamed on end users.
In the third decade of the 21st century, with the global economy and all of its constituent local economies almost entirely dependent upon information and communications technology, this is surely unacceptable.
Everyone involved in purchasing, designing, building, installing, commissioning, supporting and maintaining information systems is aware that there are numerous security risks and threats inherent in both the process and the product. If they are not, they are in the wrong profession.
Ignorance is no defence in law, and the same applies in information and communications technology (ICT). Yet the number and scale of cyberattacks and other forms of information security breach are growing year on year. It would be very easy to cite weak passwords and successful phishing scams as the primary causes of this continual growth, and many purveyors of magic-bullet solutions fall into this trap.
But, let’s take a step back from the culture of blaming the much-maligned end user and take a look in the mirror.
Our user communities have jobs to do, they specialise in those jobs, not in the security of software or systems. Basic awareness of security threats and risks is highly advisable, but hoping for any depth of knowledge or interest is naïve. Like car drivers, people taking medication, and building occupants the world over, our user communities have the right to expect that the software and systems that they are compelled to use are as safe and secure as we, the creators, can make them. Consider that for a moment. The ‘right to expect’ bit.
Not the right to hope.
A reasonable right to expect?
Commercial information systems have been around for several decades now, and while it may have been reasonable once to assume that only authorised users would have access, this is no longer the case. So far so obvious. Was it ever reasonable to assume that authorised users would never make a mistake? Or that no such user would ever be either compromised or sufficiently disgruntled to abuse their access rights? Or that activists, criminals and nations’ intelligence agencies would use whatever means at their disposal to prosecute their objectives? Probably not. Yet when reading about the underlying flaws in information systems of all kinds that have allowed security breaches to occur, it is hard to avoid the conclusion that even the most basic security measures are still being overlooked when designing, building and preparing information systems for use.
Why, for example, is it possible to create a password such as ‘123456’, ‘password’ or ‘qwerty’? Because someone made a decision to not enforce a minimum standard of password complexity. And someone else made a decision that passwords alone would be sufficient in authenticating a user’s identity. These two decisions open the system to compromise as clearly as leaving the warehouse door wide open and unguarded. Is this kind of decision-making acceptable? It is easy to blame the user for choosing 123456 as their password, but is it acceptable to absolve the people who allowed that easy option from all responsibility?
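The decision described above is, after all, just a few lines of code that somebody chose not to write. As a minimal sketch (the deny-list and thresholds here are illustrative assumptions, not any particular product's policy), a registration flow could refuse the easy option like this:

```python
import re

# Hypothetical deny-list of notoriously common passwords.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "letmein"}

def password_acceptable(candidate: str, min_length: int = 12) -> bool:
    """Reject passwords that are too short, too common, or lack variety."""
    if len(candidate) < min_length:
        return False
    if candidate.lower() in COMMON_PASSWORDS:
        return False
    # Require at least three of: lowercase, uppercase, digit, symbol.
    character_classes = [
        re.search(r"[a-z]", candidate),
        re.search(r"[A-Z]", candidate),
        re.search(r"[0-9]", candidate),
        re.search(r"[^a-zA-Z0-9]", candidate),
    ]
    return sum(1 for c in character_classes if c) >= 3

print(password_acceptable("123456"))          # → False
print(password_acceptable("Tr0ub4dor&3xyz"))  # → True
```

Even this would not be sufficient on its own, as the article notes: passwords alone should not be the only factor authenticating a user's identity.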
A plea to the IT industry
Anyone who has worked in this industry for any significant period will have seen and been involved in their fair share of the good, the bad, and the ugly when it comes to projects delivering against expectations. The statistics have improved since the 1980s, when the world's biggest spender on software (the US Department of Defense) found that only 4% of the software it commissioned was usable as delivered and 50% of its spend was completely wasted. Across the industry, acceptance of disciplines like structured project management, quality management and risk management has taken time, and the results have been reported far and wide.
The discipline of information security, relatively speaking, is still young, but it would be hard to argue against it being the responsibility of everyone involved in IT to ensure that they contribute to the security of the information under their control or management.
All organisations that commission and use information systems, by this stage in the industry’s evolution, should be placing as strong an emphasis on security and other non-functional requirements as they do on functionality. The evidence that this is not currently the case is overwhelming and the ‘Information is Beautiful’ website shows this quite clearly. Data breaches are getting more frequent and bigger. Ransomware has become a pandemic as serious to information systems as COVID is to the human population, and the range of different approaches to both problems, visible around the world, indicates that there is still a distance to travel before there is a consensus on what works best.
The industry needs to take a look in the mirror: individually, as teams and as organisations. Customers are paying large sums and not getting the security or peace of mind they have a right to expect. Information security isn’t just the CISO’s job (assuming there is one); it is everybody’s job.
Security and leaders
Business leaders and their advisors should be clear that the proportion of cyberattacks that are targeted at specific organisations is relatively small and every organisation and individual is equally vulnerable unless basic cyber hygiene is managed effectively. Certain categories of organisation are at greater risk. Perhaps they have intellectual property worth stealing or are sufficiently cash rich to target for extortion. Maybe they operate in an industry unpopular with certain activist groups. These organisations tend to know who they are, but how many of them can honestly say that they have a solid basis for being confident in their information security regimes?
Are you investing in the long-term security and resilience of your organisation by requiring all new systems to be secure by design, or are you hoping that you can save some money in the short term? Are you hoping you will avoid the data breach nightmare during your tenure?
Yes, the world moves quickly and organisations need to respond and adapt if they are to survive. So, leaders need to think carefully before specifying arbitrary deadlines for systems to be delivered or transformation programmes to be completed. Programme and project managers need to understand and act on the fact that rigorous testing for functionality, performance, resilience and security throughout the development of a system ensures that the systems that are delivered will work as expected. Cutting testing time and effort does not get the customer to where they need to be, quite the reverse.
A final thought for you. If you don’t know how and where you have been breached, it is only because you haven’t discovered it yet.
Remember the days when networks were considered as isolated, and thus defensible? That was a long time ago. It is no longer the case. This may sound like stating the obvious. But there are still organisations out there with corporate networks, including cloud services, with no segregation and little if any monitoring for security.
There is a saying amongst security professionals: ‘if you don’t know it’s there, how can you secure it?’. One of the key challenges to the architect community must therefore be maintaining an accurate inventory and configuration management database. Of course, keeping accurate records of a rapidly evolving network is difficult. But it is not impossible, and discipline in this area needs to be improved.
Are your networks and systems built to collect evidence of all activity? Is it impossible to alter the activity logs so they can be used after an incident to investigate what happened? Do you keep them long enough? Though the average time from compromise to discovery is generally falling, most sources still report this at around 200 days... a great deal can be learned about your networks, systems, people and organisation in 200 days.
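One way to make logs trustworthy after an incident is to chain them cryptographically, so that any retrospective edit is detectable. The sketch below is an illustrative hash-chain in plain Python, not a substitute for a proper log management platform; the field names are assumptions for the example:

```python
import hashlib
import json

def append_entry(chain: list, event: dict) -> None:
    """Append an event whose hash covers the previous entry's hash,
    so any retrospective edit breaks every later link."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"event": event, "hash": digest})

def verify_chain(chain: list) -> bool:
    """Recompute every link; a single altered entry fails verification."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if expected != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"user": "alice", "action": "login"})
append_entry(log, {"user": "alice", "action": "export"})
assert verify_chain(log)

log[0]["event"]["action"] = "nothing to see here"  # tampering...
assert not verify_chain(log)                        # ...is detected
```

In practice the same goal is usually met by shipping logs in near real time to a separate, write-once store that administrators of the source system cannot reach.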
The availability of detection and response solutions and services should mean that no organisation remains unprotected other than by making a decision not to be. Time will tell how customers and other stakeholders behave towards organisations which make that choice.
Patching and updating is a well-recognised challenge for the industry and many thousands of articles and learned papers have explored this topic to death. It should no longer be an issue, yet it remains so. There are two types of systems affected: those that can be patched because the vendors are still supplying patches and updates, and those that cannot because they are out of support. A third category - the ‘too fragile’ - should not exist. If you can patch it, do so. If you cannot patch it but can replace it, consider doing so. If you cannot either patch or replace it, isolate it.
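The patch/replace/isolate rule above is simple enough to express as a triage function over an asset inventory. This is a sketch under assumed inventory fields (`vendor_supported`, `replaceable` and the asset names are hypothetical), intended to show how mechanical the decision should be once the inventory data exists:

```python
def triage(asset: dict) -> str:
    """Apply the rule from the text: patch if you can, otherwise
    replace if you can, otherwise isolate."""
    if asset.get("vendor_supported"):
        return "patch"
    if asset.get("replaceable"):
        return "replace"
    return "isolate"

# Hypothetical estate records, as might come from a CMDB export.
estate = [
    {"name": "web-frontend",   "vendor_supported": True,  "replaceable": True},
    {"name": "legacy-erp",     "vendor_supported": False, "replaceable": True},
    {"name": "plc-controller", "vendor_supported": False, "replaceable": False},
]

for asset in estate:
    print(asset["name"], "->", triage(asset))
# web-frontend -> patch
# legacy-erp -> replace
# plc-controller -> isolate
```

The hard part, as the previous section argues, is not the rule but keeping the inventory accurate enough to apply it.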
In any engineering discipline, accurate data about everything on the network is vital to smooth and controlled operations. The technology solutions and services exist to help identify configuration changes, access, performance, stability, capacity and availability and to keep accurate records of every such event. It is no longer prohibitively expensive to analyse the patterns of events and activities to identify potential security incidents and investigate them quickly. This is not an art form, it is not magic. It requires discipline, rigour and control to make it look like magic to the organisations and people you support.
Do you know who has access to what systems and what they can do with them?
Ask yourself that question again and this time, be really honest with yourself. Are there no accounts still active that relate to people who have left the organisation? When people change roles, are the rights they no longer need revoked in a timely manner? When someone is terminated at short notice, can all of their access rights be terminated before they are abused? Even to the SaaS services that they signed up for themselves?
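Checking for leaver accounts is, at its core, a reconciliation between the HR record and the directory. As a minimal sketch (the data here is invented for illustration; in practice the feeds would come from an HR system and a directory service), the orphan accounts are simply the intersection:

```python
# Hypothetical HR leavers feed: usernames of people who have left.
leavers = {"j.smith", "p.jones"}

# Hypothetical directory export of currently active accounts.
active_accounts = {
    "j.smith": {"roles": ["finance-admin"]},
    "a.khan":  {"roles": ["developer"]},
    "p.jones": {"roles": ["sales"]},
}

# Any active account belonging to a leaver is an orphan to be disabled.
orphans = sorted(set(active_accounts) & leavers)
print("Accounts to disable:", orphans)
# Accounts to disable: ['j.smith', 'p.jones']
```

The reconciliation is trivial; the organisational challenge is running it routinely, and extending it to the SaaS services that never appear in the central directory at all.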
When was the last audit of administrator, system and service accounts? Do you insist on administrators using dedicated accounts for administrative activity or do you use a Privileged Access Management solution to provide the access they need in a centrally controlled and auditable manner? Are your processes for responding to incidents, changes and requests effective and efficient and do the originators get regular updates?
Secure code is, in the lifetime of IT, a relatively new concept. In the early days, secure coding specialists were very rare creatures indeed and finding one to read through code for security flaws was a step too far. This is no longer the case and there are tools available which can be integrated into almost every conceivable development environment, to scan code as it is written, and in bulk, for security vulnerabilities and insecure coding practices.
The Open Web Application Security Project (OWASP) has been producing guidelines and valuable insight into secure coding practices for 20 years. Yet systems are still being delivered where it is possible to enter SQL code into a Username field on a login screen. Since the cost of fixing errors in code rises exponentially the later in the lifecycle they are discovered, surely it is in everybody’s interest to identify any error as early as possible?
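The SQL-in-the-username flaw mentioned above comes from splicing user input directly into a query string, and the fix has been standard practice for decades: parameterised queries. A minimal sketch using Python's built-in `sqlite3` (the table and data are invented for the demonstration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hash1')")

def find_user_unsafe(username: str):
    # Vulnerable: attacker input is spliced straight into the SQL text.
    query = f"SELECT * FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(username: str):
    # Parameterised: the driver treats the input as data, never as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE username = ?", (username,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row in the table
print(find_user_safe(payload))    # returns no rows
```

The classic `' OR '1'='1` payload turns the unsafe query's WHERE clause into a condition that is always true; the parameterised version searches, harmlessly, for a user literally named `' OR '1'='1`.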
It is not reasonable to expect every developer to know every trick a malicious actor might use to exploit vulnerabilities in code. Nor is it reasonable to expect any development team to be 100% consistent in the way in which they create systems. Investing in security testing and scanning tools is not allowing developers to cheat - and that argument has been used - but is giving them the tools they need to develop better, more secure code which makes for more resilient systems faster and more efficiently.
Every person, regardless of their role, needs to have the tools at their disposal to allow them to do the best job they can. Today, for application developers, that toolbox must include security scanning and testing tools every bit as much as it includes a controlled code repository and deployment solutions. Is that still debatable?
Why all the questions?
This article started with the question ‘are end users really the weakest link?’. This question, which is frequently phrased as a statement of fact, seems intended to place an unreasonable burden of responsibility on a community of people who cannot be assumed to have any specialist knowledge of IT or cybersecurity. Millions of pounds are spent on security awareness campaigns targeting this community to educate them about the dangers of phishing emails, weak passwords, tailgating and messy desks. Of course this education is important and necessary, but it is not sufficient.
Education and awareness campaigns need also to target leaders, managers, architects, operators, administrators and developers to ensure that the most vulnerable community - not the ‘weakest’ - the users who are the organisation, are protected from the temptation of phishing emails, guided by the system to use only complex passwords or other means of authentication, and know that everyone else on the team is also pulling their weight when it comes to cybersecurity.
The questions posed in this article are intended to get internal conversations going around collectively improving the performance of the entire IT sector, including solution vendors, service providers, internal teams and the purchasers and consumers thereof. The industry can do better, by learning the lessons of the past and bringing security and privacy by design into the core of the systems life cycle, from concept to retirement. The security threats are not going away and they are very real. Are you really doing enough?