Joseph Rose is a senior solutions architect specialising in infrastructure and security with a varied career across banking, financial services, insurance, defence, aviation, utilities and more. Here, he tells Johanna Hamilton about the changing face of cyber security...

I graduated in computer science and software engineering around 25 years ago and went on to jobs that mixed network infrastructure and software development. Within a few years, I found myself leaning more towards the infrastructure side.

In about 2005 I became a technical architect, and from about 2008 onwards security started to come into what I was doing more and more. At the time, I was working for a gambling and gaming company, and a lot of that business was moving online, so PCI compliance started to come into what I was doing.

From there it was steadily more complex programmes, more complex setups, larger numbers of users and a greater focus on uptime, reliability and scalability, so the security and infrastructure architecture elements of the job grew in complexity and scale.

With greater scale comes greater vulnerabilities. How do you keep on top of it all?

Quite a few companies are behind on core patching and configuration management - just keeping software up to date and knowing what software you have. Having a good inventory, knowing where it’s being used, knowing when it’s coming up for renewal in terms of software licences and things like that and making sure that the base operating systems are keeping pace and everything’s supported.

Any impetus around patching used to come mainly from the operating system provider, so you’d have concepts like Microsoft’s ‘Patch Tuesday’.

Just being aware of all the vulnerabilities that exist in your estate, and how dangerous each is on a score out of 10, is important. Weighted risk scoring on top of that, so you know which vulnerabilities have known exploits, helps with prioritising what must be fixed urgently. An exploit is something someone has actually written to trigger the vulnerability, which can be downloaded from the ‘dark web’, from the public domain, or from ethical hackers. It’s all about awareness and prioritising remediation, with a view across the entire estate.
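The weighted risk scoring described here can be sketched in a few lines. This is a hypothetical illustration, not any particular scanner’s algorithm: the CVE names, the 1.5 weighting for known exploits and the cap at 10 are all assumptions made for the example.

```python
# Illustrative sketch: prioritise vulnerabilities by base severity score,
# weighting up anything with a known public exploit.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    cvss: float          # base severity score, 0.0-10.0
    exploit_known: bool  # a working exploit is publicly available

def risk_score(v: Vulnerability) -> float:
    # Assumed weighting: a known exploit raises effective
    # urgency by 50%, capped at the scale maximum of 10.
    weight = 1.5 if v.exploit_known else 1.0
    return min(v.cvss * weight, 10.0)

def prioritise(vulns: list[Vulnerability]) -> list[Vulnerability]:
    # Most urgent first, so remediation effort goes to the top of the list.
    return sorted(vulns, key=risk_score, reverse=True)

estate = [
    Vulnerability("CVE-A", 9.8, False),
    Vulnerability("CVE-B", 7.5, True),   # exploited in the wild
    Vulnerability("CVE-C", 5.0, False),
]
for v in prioritise(estate):
    print(v.cve_id, risk_score(v))
```

Note how the weighting reorders the list: CVE-B, with a lower base score but a known exploit, jumps above CVE-A.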

So day-to-day is your job more analytical or technical?

It’s a bit of a mixture. On the one hand, there is a set of projects where solutions and deliverables need to be produced, so I’m doing some of that. On the other hand, I’m looking at issues in the estate where something is about to break or is nearing ‘end of life’.

The end-of-life programs are more problematic, as we have to do something. There are no more security patches, and if an exploit is found against an old vulnerability, we won’t be able to fix it - remediation or upgrades suddenly become pretty important.

Along the way there are little ancillaries or features that weren’t available when the current solution was put in that are now considered part of best practice and will help you solve other problems.

As an example, there might be a public key infrastructure that serves digital certificates. It’s kind of invisible until you’re building a website and want it to use secure traffic, or HTTPS; then you need some sort of SSL certificate, and that’s got to come from somewhere. Public-facing websites can be signed by third-party certificate authorities, but internal secure communications need a reliable, available PKI.
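Part of keeping a PKI reliable is knowing when each certificate it has issued is due for renewal. A minimal sketch of that check, assuming a hypothetical inventory of hostnames and expiry dates (the names, dates and 30-day renewal window are invented for illustration):

```python
# Illustrative sketch: flag certificates in an inventory that are
# expired or coming up for renewal, so they can be rotated in time.
from datetime import datetime, timedelta

def renewal_status(not_after: datetime, now: datetime,
                   window_days: int = 30) -> str:
    # 'not_after' is the certificate's expiry timestamp.
    if not_after <= now:
        return "EXPIRED"
    if not_after - now <= timedelta(days=window_days):
        return "RENEW SOON"
    return "OK"

now = datetime(2020, 6, 1)
inventory = {
    "internal-api.example.local": datetime(2020, 5, 20),
    "intranet.example.local":     datetime(2020, 6, 15),
    "vpn.example.local":          datetime(2021, 6, 1),
}
for name, not_after in inventory.items():
    print(name, renewal_status(not_after, now))
```

In practice the expiry dates would come from the certificates themselves rather than a hand-maintained dictionary, but the triage logic is the same.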

Often, if it’s not one of the new emerging tools or new emerging issues people are like, ‘Can’t somebody else work on that?’ It can be quite hard sometimes to get a responsible attitude, or enough time allocated due to resourcing challenges.

Do you feel there’s a lack of team accountability?

Some things running underneath production services may not have been architected - they were just built to serve an immediate need, without taking a wider view of the overall requirements. What I’m trying to say is that some things aren’t scalable and aren’t available in all the right places.

As regards internal data systems and cloud service providers, if the company is busy migrating to Amazon or Azure or Google Cloud then all those things need to be considered, because you want to be able to consume the certificates easily; in the world of cloud it’s more important than ever to be able to do that quickly and not hold things up. But somebody has got to actually be managing the security worthiness of the PKI, making sure that it’s up to date and that the certificates it’s issuing are strong enough and not subject to vulnerabilities themselves.

Tell me about business transformation within cyber security?

It’s interesting that while some of the teams I’ve worked with in this area understand formal modelling of processes and ‘the right way’ to do various activities, they often don’t go all the way on this journey. They might do process modelling, they might even look at process simulation, but they very rarely do process execution. So ideally, we model the processes, simulate them with real values, check that the right answers are coming out, and then actually push them into some sort of workflow engine.

One of the ways to try and show that you’ve met all the security design principles is to produce something called a threat model. This will show both a happy path into the system - how an external or internal user may access the service - and another path where a malicious actor tries to go around the security, with the threat model showing how compensating controls defeat the attempts to subvert the system or exfiltrate its data.

How do you deal with security in an ever-more connected world?

I suppose it’s important to start with the view of the threats. If you go back 15 years, the sole concession towards security very often was anti-virus on desktops and a firewall. Maybe two layers of firewall if it was a very large company. The complexity of IT projects and the drive for more automation and more self-service has led to larger project teams, and more focus on security being delivered as part of any new solution.

Companies can’t usually get approval to hire a big chunk of permanent staff, so invariably they reach out to outsourcers, contractors or consultancies. They realise that, depending on how these people are onboarded, the company might not know a lot about them. It doesn’t know their motivations for being there, and it’s possible that a competitor could place somebody to learn more about the business from the inside.

Can you guard against insider threats?

There are a number of compensating controls to protect against insider threat. Management oversight is one, making sure your critical processes can’t be exploited. Sorting out privileged access management is another. You can grant engineers time-based access to production systems, based on a manager’s approval, but the administrator or engineer never actually sees the password - there’s no opportunity to steal credentials.
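The time-based access described here can be sketched as a grant object that records who approved it and when it lapses. This is a simplified illustration, not any particular PAM product’s API; the names, system identifiers and four-hour window are invented for the example, and a real system would broker the session so the password never reaches the engineer at all.

```python
# Illustrative sketch: a time-boxed privileged-access grant that
# expires automatically, with the approver recorded for audit.
from datetime import datetime, timedelta

class AccessGrant:
    def __init__(self, engineer: str, system: str, approved_by: str,
                 start: datetime, duration_hours: int):
        self.engineer = engineer
        self.system = system
        self.approved_by = approved_by   # manager who signed off
        self.start = start
        self.end = start + timedelta(hours=duration_hours)

    def is_active(self, now: datetime) -> bool:
        # Access exists only inside the approved window.
        return self.start <= now < self.end

grant = AccessGrant("alice", "prod-db-01", approved_by="bob",
                    start=datetime(2020, 6, 1, 9, 0), duration_hours=4)
print(grant.is_active(datetime(2020, 6, 1, 10, 0)))  # within the window
print(grant.is_active(datetime(2020, 6, 1, 14, 0)))  # window has lapsed
```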

Another one is multi-factor authentication, something to try and guarantee that somebody is not logging in using stolen credentials. They have to type in a number from an RSA token, perhaps from a phone app or a physical token, or use a Face ID or Touch ID credential from their phone. Another critical control is file integrity monitoring, which alerts when a file that’s required to be in a certain state for an application to run has changed; the firm’s software can then roll it back to the known good state. So those are just a few of the things that can protect against threats.
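The detect-and-roll-back loop of file integrity monitoring can be sketched with a hash baseline. A rudimentary illustration, not a production FIM tool: the file contents are invented, and real products watch many files continuously rather than checking once.

```python
# Illustrative sketch: baseline a critical file with SHA-256,
# detect drift, and restore the known good content.
import hashlib
import os
import tempfile

def digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Baseline a config file that must stay in a known state.
fd, path = tempfile.mkstemp()
os.close(fd)
known_good = b"setting = safe\n"
with open(path, "wb") as f:
    f.write(known_good)
baseline = digest(path)

# Simulate an unauthorised change, detect it, then roll back.
with open(path, "wb") as f:
    f.write(b"setting = tampered\n")
if digest(path) != baseline:
    print("ALERT: file changed, restoring known good state")
    with open(path, "wb") as f:
        f.write(known_good)
print(digest(path) == baseline)
os.remove(path)
```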

Have you come across insider people who try to change systems to threaten business integrity?

I haven’t personally come across people working in an IT department with the deliberate intention of breaking in and subverting. More often than not, when hacks are occurring, they’re coming from outside, but the issue is detecting that they’re in. Sometimes you can detect that they’ve been in, but it could be a month later.

Usually the only effective defence is to have some sort of continuous perimeter monitoring, so you’re looking at the security of your web-facing systems. It will basically tell you that certificates have expired or are of insufficient strength. If you have an SSL certificate using an old hash algorithm, such as SHA-1, there are various ways you can downgrade the security and potentially get onto that system.
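One check such perimeter monitoring performs is flagging certificates signed with deprecated hash algorithms. A minimal sketch, assuming a hypothetical inventory mapping hostnames to their certificate’s signature hash (the hostnames are invented; a real scanner would read the algorithm off the live certificate):

```python
# Illustrative sketch: scan a certificate inventory for weak
# signature hash algorithms that perimeter monitoring should flag.
WEAK_HASHES = {"md5", "sha1"}

def weak_certs(inventory: dict[str, str]) -> list[str]:
    # Return hostnames whose certificate uses a deprecated hash.
    return [host for host, alg in inventory.items()
            if alg.lower() in WEAK_HASHES]

inventory = {
    "www.example.com":    "sha256",
    "legacy.example.com": "sha1",   # downgrade risk
    "old.example.com":    "md5",
}
print(weak_certs(inventory))
```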

Then once you’re actually on that system, you might be just in the nuisance bracket. There’s a file on an Apache web server called .htaccess. If you find that’s been left open - say a test system got accidentally promoted to production status and no permissions have been set on the .htaccess file - you could go in and add an ‘all users deny’ rule, and basically everybody who tries to access the website after that won’t be able to load the home page. This is not a denial of service attack using a volume of connections, but it stops real customers from accessing the site.
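The misconfiguration at the root of this is a writable .htaccess, so a defensive audit can simply check the file’s permission bits. A small POSIX-only sketch (the temporary file and the 0o646 mode stand in for a real web root’s .htaccess):

```python
# Illustrative sketch: detect an Apache .htaccess file that has been
# left world-writable, allowing anyone to inject a deny-all rule.
import os
import stat
import tempfile

def world_writable(path: str) -> bool:
    # S_IWOTH is the 'others can write' permission bit.
    return bool(os.stat(path).st_mode & stat.S_IWOTH)

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o646)            # accidentally left open to everyone
print(world_writable(path))      # flagged for remediation
os.chmod(path, 0o644)            # owner-writable only
print(world_writable(path))      # now safe
os.remove(path)
```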

How do you see your role changing?

I’m already doing a lot of remote working. I might be paid out of London but I’m working on global projects, which requires me to be flexible in turn about my availability. With increased remote working, it’s tempting to do supplier engagements totally by phone, but I still believe it’s very useful to build relationships with the technical people actually at the business, because ultimately the fact you’ve made the effort to go and see them is always reciprocated in some way. I’ve been a contractor for many years, and I’d say there’s certainly potential to do more international work, despite Brexit maybe making things more awkward in the short term.
