Sometimes, we can overlook the true impact of cybercrime on individuals. Mike Sheward, Director of Information Security at Accolade, explains how learning to appreciate and understand the people behind the data makes us better incident responders.

We’re all familiar with the routine. A data breach occurs, the targeted organisation puts out a press release reminding us that they ‘take the security of customer information very seriously’ and that they’ve ‘hired a leading cybersecurity company to investigate how the breach occurred’. Then the numbers come in.

Millions, tens of millions, even hundreds of millions of records are impacted. Finally, sprinkle in a couple of years of free credit monitoring for those affected who wish to sign up for it, and what has become the standard response process is complete once again.

It happens with such frequency, and at such a scale, that it can be easy to become immune to the headlines. To accept that this is a justifiable risk in exchange for the convenience of living our lives online. That sentiment is easier to understand when you’re observing from a distance, focusing on the number of rows in a database.

Meeting a victim

However, things change pretty quickly when you’re face-to-face with a real, individual victim. You realise that behind the numbers, behind the gigabytes and terabytes, real people find themselves in the crosshairs of unwanted distress and disruption.

It’s a sobering feeling, but one that, as an information security professional, motivates you to work harder to prevent incidents from occurring in the first place, and to plan your response better for when they do.

Information security is a people business

I’ve been fortunate enough to spend the last decade working primarily on incident response and investigating digital crimes, and during that time I’ve met with numerous victims of those events. Recently, I worked with BCS to publish ‘Hands-on Incident Response and Digital Forensics’, a practical guide to those two information security topics.

During the writing process, I revisited several old cases and incidents, and used them to relate the theory behind the discipline to real-life experiences and people. This aligns with one of the greatest lessons I’ve ever learned in my career: that information security, for all the logic, technology, process and anonymity associated with it, is very much a people business.

Nowhere has this lesson been more applicable than in my current position at Accolade, where I’m responsible for the safety and security of millions of healthcare records belonging to our clients and their families. Protected health information (PHI) is highly regulated in the United States, and for good reason.

The risks associated with compromised PHI include everything from identity theft to medical fraud, and even blackmail. Unfortunately, malicious actors are well aware of this, a fact reflected in the value of stolen medical records traded online. Unlike credit cards, you can’t get a replacement medical history or identity in the mail if yours becomes compromised.

My team meets with every new employee to convey this message as part of our security awareness programme. When it’s my turn to deliver the training, I frequently use the story of a former colleague as a prime example of what can happen if PHI gets into the wrong hands.

Around six years ago, a figure entered my peripheral vision. I looked up from my desk in a large open-plan office to see a familiar face, someone I’d seen around but hadn’t really interacted with. This time something was different. The face was awash with a steady stream of tears, and I knew something was very wrong. ‘Is everything okay?’ I asked, ridiculously. The answer came in a wave of tears and emotion as my colleague explained that their partner had recently passed away, and that a credit card had been opened in the partner’s name that morning.

Identity theft of a deceased person is an all too common crime. Typically, in this line of work, we’d give technical explanations or look for indicators as to how this could’ve occurred, but in this case, the person at the end of my desk just needed help. Quick, actionable, reliable help, and they didn’t know where else to turn. Another member of the security team and I helped direct them to the appropriate resources and even offered to put in a couple of calls.

Things got cleared up, and soon thereafter the source of the breach was revealed, when a health insurance provider disclosed an incident that had directly affected this person’s information. This event stuck with me, and helped shape my personal approach to identity theft cases. On paper, this was just one of a couple of million impacted people.

In reality, this incident caused a tremendous amount of stress to someone whose life was already incredibly stressful, having just lost their partner. I think of this person every time I investigate an incident, justify the value of a security control, or write a policy. I don’t want anyone else to endure what this person had to endure because I didn’t try hard enough.

The other side of the coin

Of course, people aren’t just victims; they’re also the perpetrators of digital crimes, often empowered by the supposed anonymity that sitting behind a keyboard or touchscreen affords. In the previous example, we weren’t in a position to identify the perpetrators because the incident didn’t occur on systems within our purview.

However, I can recall plenty of occasions where it’s been possible to successfully place a suspect behind a keyboard, even when this may not have been the original intent.

In the early days of my career, while working as a network engineer, I noticed some strange traffic patterns leaving a particular subnet. It was the end of the day, with few people left in the office, so I made the decision to disable a router interface, effectively cutting off network access for an entire building.

My plan was to come in early the next day, contact an IT manager at that site, and work with them to identify the source of the traffic. Being relatively new to the position, I wanted to make sure my thought process was well documented, so I bundled together several screenshots, logs and notes and sent them off to my manager. I wasn’t sure how cutting off network access would be received, but figured at least they’d be able to see it wasn’t a decision I’d taken lightly.

The next day, when I spoke to the IT manager, they already had an idea of where the traffic was coming from. ‘It was a virus, I’ve removed it. Please re-enable the connection,’ came the response. The phone call ended, and the strange traffic failed to return. Case closed, or so I thought.

A few weeks later, a member of human resources came to see me and let me know they had reason to suspect that the same IT manager had been violating an acceptable use policy on the network, in quite a serious manner. My actions that day had apparently raised suspicion amongst other members of the IT organisation. I’d somewhat unexpectedly been a first responder in a security incident, and caught someone up to no good.

I felt kind of uneasy. I mean, this would likely cost someone their job. I just wanted to protect the network. I didn’t want to get anyone in trouble. I explained this to my manager, who sought to address my concerns. ‘Today it’s this, tomorrow who knows what they’d be up to if they didn’t get caught. You’re saving them from getting themselves into more trouble.’

With that, I knew this was the work I wanted to do full-time, and I embraced that mantra in every subsequent incident and case I worked, always remembering that, ultimately, it’s all about the people behind the data.

Mike Sheward’s book, Hands-on Incident Response and Digital Forensics, is available from the BCS bookshop.