From cyberbullying to stalking and transphobia, online harms are damaging lives. A panel of BCS experts dissect the problem and explore possible solutions.

Understanding online harms and how to fix the problem


The world wide web is undoubtedly one of humanity’s crowning achievements. It has reshaped the world by allowing those who are connected to communicate and share information in ways which, only a few decades ago, would have seemed like science fiction.

But, dig below all that is good and great about the web and you’ll find a hard and unpleasant reality: the world wide web can also be a terrible place to be.

Online trolling, bullying and stalking are rife, and they are making the real lives of victims intolerable and, in some cases, unliveable.

Understanding online harms

‘Being online is a force magnifier,’ says Abigail Simmons, founder of Trans Tech Tent. ‘It allows people to say whatever they like and they project that voice to people around them.’

The problem, Simmons says, is that an utterance - if it is transgressive - often goes unpunished. And, as well as going unpunished, a harmful post is often upvoted, up-ticked and applauded by like-minded users. ‘This allows abuse to multiply online. It’s all adding to making the [online] world a much less welcoming place,’ she explains.

The question is, of course, why do some people feel able to say transgressive things online? Does the web cast some kind of unwelcome magic which makes users feel disinhibited? Does being on the web make people feel able to behave in ways they may not choose to do in their daily lives, offline?

How being online enables online abuse

‘You can answer this question in so many different ways,’ explains Dr Emma Short, Associate Professor in Psychology at De Montfort University. ‘But, context is very important. Some sites are notoriously more aggressive than others.’

To illustrate the point, Dr Short points to so-called ‘dragging’ sites. Sometimes called trashing sites, these forums are dedicated to following someone’s every move.

They usually focus their attentions on somebody (un)fortunate enough to have a high degree of online visibility: journalists, bloggers, vloggers and celebrities. Under the guise of transparency in social media coverage, the drag site communities, in reality, focus their energies on slating everything the subject does.

‘They focus on taking [the person] down,’ Dr Short says about the drag sites. ‘And when you get escape - spillage - from that world on to the mainstream, like Twitter, the degree of hate feels really disproportionate and frightening.’

Dr Short also points to the racist attacks that took place on social media following the England team’s defeat in the Euro 2020 finals.

‘When there’s something [big] happening, there is an enormous wave - like the Euros and the racist attacks,’ she says, explaining how real world events gather momentum and can create a tsunami on social media.

‘Equally, though, those [football] events were met with a really positive response. There was a defence and really positive counter speech,’ she recalls.

Beyond content and situations unfolding in the real world, Dr Short also holds that online hate speech has its roots set firmly in psychology. Specifically, she points to John Suler’s work on the online disinhibition effect.

The psychology of online abuse

Suler’s 2004 work found that, while online, some people self-disclose or act out more frequently or intensely than they would in person.

‘You can start with the idea of anonymity,’ Dr Short explains. ‘That is very enabling. But, even when our identity isn’t concealed, you have invisibility. People might know who we are but can’t see us. When we’re in the process of forming our communications, nobody can see us... so, you don’t get the normal social cues - like eye rolling, for example, if you’re beginning to say something terrible. And, equally, you can’t see the person’s response in real time as they read the message.’

‘The internet doesn’t have a “dark side”,’ says Professor Andy Phippen from Plymouth University. ‘It is just a collection of cables, wires and routers. Society has a dark side and it is reflected on the internet.’

Society and humans make the internet

‘If you have a disconnect between your words and their impact, you are more likely to say them,’ says Professor Phippen. ‘If, twenty years ago, you had somebody sat on a bar stool shouting racist abuse, most people in the pub would tell them to “shut up”. Now, if somebody goes online and shouts the same abuse, thousands of people will like their comment and say, “Well done, you’re being really brave saying that.” Regardless of how abhorrent your view is, you’ll have people who will facilitate it and, as a result, you’re more confident expressing that facilitation. That’s one of the challenges we have online.’

Part of the problem, Phippen says, is that there’s a comparative lack of education in the area. ‘As a result, everybody brings their own value biases. I spend a lot of time with young people who will point-blank refuse to report online abuse because there’s no point. Nobody will do anything about it.’

We’re all groping in the dark, he believes, and we’re all learning from our own experiences. On a social level, this isn’t a problem. ‘But,’ he says, ‘if you’re a police officer, a teacher or a politician - the lack of STEM knowledge within the political space terrifies me... telling revenge-porn victims they shouldn’t have taken pictures of themselves [for example]. Emma Bond at Suffolk University did a lot of research into whether police received any training in image-based abuse.’

Living the reality of online harms

‘I’ve had instances I could report,’ echoes Simmons. ‘The problem is, you have to get through to people who understand: the “five percent”. The five percent of officers, or the five percent of administrators. If people don’t understand, they’ll either shrug or send you off to a different organisation.’

Against this backdrop - a place where society, psychology and opportunity all contrive to enable online abuse - the question arises: are technology firms doing enough to staunch the problem?

‘These days, I think tech firms are trying to do quite a lot,’ says Phippen. ‘They’re providing tools for blocking, reporting and muting. The challenge is in getting people to believe that those tools will be effective. One of the good things coming out of the Online Safety Bill is the fact that transparency reporting is going to be a legislative expectation. This means that companies will have to provide details on reports they’ve received... the number of accounts they’ve taken down, the number of reports they’ve upheld... that’s all really positive stuff.’

More technical solutions

Platforms are also creating other ways users can protect themselves - like muting other users. But, does muting abusers work? Indeed, is tactically muting a good idea?

‘Muting is a good idea,’ says Dr Short. ‘Particularly if you’re being caused a lot of distress... I’ve [experience] of stalking - fixated abuse: pursuit where you are being targeted and it’s likely that the risk is escalating. So, I’d suggest going to the account where the abuse is coming from, looking and asking: are they generally abusive - are they abusing everyone? If they are, you’ve been caught in the crossfire. If they are just abusing you, though, I think it is quite important to keep that account visible so you can assess the risk or you can seek support and you’ve got the evidence, should you need to take action.’

Phippen, however, cautions against the view that solving online abuse is simply a task for software. ‘Software,’ he says, ‘is a tool - racism is a social problem.’

‘There are plenty of technical solutions that are trying to solve this problem,’ says Simmons. ‘But, technology chasing a social problem - it just doesn’t work. You’re chasing high technology without the underlying ethics, without the underlying social change and progress that you need.’

Is AI the answer?

It is, of course, tempting to imagine that AI might be the silver bullet that’s needed to solve online hate. All we’d need is a suitably trained system which could vet posts before they’re made live and, if they’re found to be distasteful, offensive or even illegal, the AI could consign them to the digital dustbin before they saw the light of day.

‘AI is great at narrow systems with a lot of very specific and focused data,’ says Phippen. ‘The best example is tumour diagnosis. You can throw a huge number of images of tumours at a recognition system and it’ll recognise them [based on its training]. Take that to a massively open system such as Twitter and say: “Right, now identify racism. Here is a corpus of racist terms - if you see terms like this, they might be racist.” It just won’t work.’


‘If you are looking at large, open and complex systems you can identify something like [specific] racist terms - that’s quite easy.’ But, Phippen explains, language and its usage is a very broad, changing and unfixed business. For one group, a word might be deeply offensive, while for another, it might be a term of unity.

There are, of course, certain swear words which have many lives and usages beyond summing up bodily functions. One word can convey surprise, displeasure; it can be a verb, a noun or be used to add emphasis to other words.

‘It’s the classic example,’ explains Phippen. ‘Porn filters... why does Scunthorpe still get grabbed by porn filters? If you’re identifying things through sexual keywords, you’re going to have a lot of false positives. And, having false positives when you’re accusing people of racism isn’t useful. AI just doesn’t do these things in a complete and reliable way. That makes it very dangerous in open systems.’
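The failure mode Phippen describes - keyword filters flagging innocent text - can be sketched in a few lines of Python. This is a hypothetical, single-entry blocklist for illustration only, not any platform’s real filter: a naive substring match catches the place name ‘Essex’ because it happens to contain a blocked sexual keyword, while even a stricter whole-word match only trades false positives for easy evasion.

```python
import re

# Hypothetical blocklist with a single sexual keyword, for illustration.
BLOCKLIST = ["sex"]

def naive_filter(text: str) -> bool:
    """Flag text if any blocked term appears anywhere inside it."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def word_boundary_filter(text: str) -> bool:
    """Flag only whole-word matches - fewer false positives, but
    trivially evaded by misspelling or spacing out the term."""
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", lowered)
               for term in BLOCKLIST)

print(naive_filter("Greetings from Essex"))          # True - a false positive
print(word_boundary_filter("Greetings from Essex"))  # False
print(word_boundary_filter("s e x"))                 # False - evasion slips through
```

The sketch shows why ‘just filter the keywords’ cannot scale to an open system: tightening the rule to cut false positives immediately opens a path for deliberate evasion, which is exactly the trade-off Phippen highlights.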

So, can we influence the AIs used by social media platforms? Can we, through down voting, reporting, silencing and interacting with content we dislike, start to train those AIs into recognising posts as unwelcome and unwanted? The answer seems to be that it’s unlikely. Transgressive speech and speakers can have lots and lots of approving followers but often only one victim.

So, what’s the answer? Our speakers agree that there’s no silver bullet that can solve online hate. Rather, we all have our part to play. As responsible users, we can support victims and report hateful speech to platforms. Equally, many different agencies have a part to play: educators and education; politicians and policy makers; law enforcement; advertisers who use platforms to sell; high-profile users’ boycotts - each of these can add to a growing pressure. It’s equally clear that leaving tech firms to self-regulate is an approach that isn’t working.