Luciano Floridi and Catherine Miller discuss AI, ethics, the public’s attitude to technology and how legislation might be the best way to ensure the digital world is a fair one. Martin Cooper MBCS reports

Technology is evolving quickly and as it dashes towards Silicon Valley’s ever-shifting vision of a super-future, there are well-documented winners and losers. If you can afford the latest and greatest gadgets and apps, then you’ll be able to do, see, have, feel and experience more. But, everything has a price: AI could be taking our jobs, our data is being used against us and our gadgets are hostiles in our homes. Tomorrow, if the headlines are to be believed, might be bleaker than Dickens.

But there is a route towards a fairer and more functional future, Professor Luciano Floridi from Oxford University and Catherine Miller from Doteveryone agree: it might just be legislation, not debate.

To start the exploration, Floridi points to Elon Musk and his off-beam prediction - made some years ago - that AI is humanity’s greatest existential threat.

‘The beauty of scholars is we don’t forget,’ observes Floridi. ‘When you say something silly, we will remember it and we will remember it for the rest of your life. That’s what archives are for. And so, Mr Musk will go down in history for saying something absolutely ludicrous.’

We now know the real threats: biology and global warming. Floridi says: ‘We need [AI] to deal with boring reality, not science fiction. The real everyday work we need to do.’ Outlining five very un-Hollywood ways AI can help humanity, he says we should make AI:

1. Work against wrong-doing

AI can make bias worse, acting as a force magnifier. But these problems stem from the data - not necessarily from the AI itself.

2. Enhance human decision-making and control

AI shouldn’t take away responsibility, it should enable us to make better use of the resources and the time we have. This shouldn’t be done at the expense of operational clarity. The problem is, when we look to manage complex systems like smart cities, this process generates more complexity. As history advances, so does complexity. Here, AI could be a huge force for good.

3. Support human responsibility

Global warming is possibly the biggest challenge humanity faces - in a literal sense. It’s not science fiction. AI can most certainly help society navigate this challenge.

Floridi explains: ‘I want to invite all these billionaires who want to go to Mars to take care of this planet first - just in case we don’t have one to move to in the next generation or two.’

Just because something is interesting, it doesn’t mean we should do it. We need to tackle real, hard problems.

4. Make us more human

Ignore cyborgs and science fiction: the true challenge is the erosion of autonomy. We are surrounded by subtle recommendation systems that suggest clothes, foods, movies and choices. This won’t make the newspapers. It’s not a catastrophic threat to our agency. Rather, it is us leading ourselves down a potentially dark and unthinking path.

5. Work for humanity

AI isn’t going to destroy jobs, as the headlines suggest. ‘Imagine you are in 1920 - 100 years ago,’ suggests Floridi, ‘and somebody says: “please predict the future in 1940 or 1950.” Any prediction would have been so completely wrong: it is simply impossible.’

Against this backdrop, it is very hard to make concrete predictions about tomorrow’s job market.

Certainly, economics and economic viability will have a part to play. Demand for certain skills and business models may reduce. But, again, it’s hard to be definitive today about tomorrow’s market.

Consider driverless cars. On one hand, it’s easy to imagine that lorry drivers are likely to see their skills being replaced by AI. But, lorry drivers do so much more than drive: they check, fill in forms and navigate physical challenges beyond just arrow-straight motorways. There is subtlety and scope in job descriptions that job titles might not depict fully.

New roles will be created - the problem is, the people who get them might not be the same people who lost their jobs previously. Reskilling may be a more tangible challenge.

Closing the point, Floridi says ‘it is, however, most certainly not the end of work. Some jobs and skills may disappear and it may be the end of people identifying themselves purely by their job.’


Offering a summary, Floridi says ‘the new challenge is not technological innovation but the governance of the digital. It’s what you do with it that makes a big difference…. We need a human project… There is no right or wrong direction.’

What the public thinks

But, what does the public think and feel about AI and technology in general? This is the focus of Catherine Miller’s talk, where she outlines Doteveryone’s People, Power and Technology report. Miller points out, through the research, there’s a solid body of evidence about the digital divide and digital inclusion - technology makes people’s lives better and its absence holds back those who don’t have access. But, she asks, ‘do people really like this stuff?’

This question’s importance was amplified by the Cambridge Analytica scandal and the ‘tech clash’ - a tension caused by a misalignment of consumer expectations, technology’s potential and business ambition.

Unlocking the benefit of technology for everyone

‘It’s really important to stress in conversation that people like technology overall,’ says Miller. ‘People think it is making their lives better and, to a lesser extent, it is making society better… People see that technology does good in their lives.’ But, there is an increasing divide, with people who have more advantages and privilege feeling the internet has helped them more. There is, as such, a real risk of creating a tech underclass.

Covid-19 and lockdown have had a profound impact on perceptions. Miller explains: ‘people went wow! Tech is amazing! I can’t leave my house, but I can still do my job and I can still educate my children and I can still communicate with my friends and family…’ That moment of lockdown was a moment of wonder and appreciation.

‘But,’ she continues, ‘many people felt the tech sector is regulated too little… There was very little sense that self-regulation would work. People are very keen to see government step in.’ Indeed, research suggests people are willing to forego some choice and access to content as a side effect of legislation. People, Miller says, are ‘willing to accept those trade-offs.’

So, what is the most effective way to ensure IT is good for everyone? Miller asks: ‘how can we ensure tech companies pay attention to the needs of people in society? [Our research found] a Hippocratic oath was seen as the least effective, though it is the most talked about. People felt the most effective would be legislation.’

Continuing, she explains, ‘we want to see legislation that is iterative, adaptable and flexible so you’re not trying to write a rule book for technologies you’ve not seen yet and harms that are also unseen.’

Miller also warns about the tranquilising and distracting effects of too much debate: ‘having a very long-winded debate creates a window for more bad stuff to happen… It has been greatly in the interest of some of the big tech companies to create an ethics debate, which purposefully puts off the point of governance and real stuff they have to comply with for an awfully long time.

‘The more than 300-page PDFs they upload… about their self-appointed ethics committee… which has been very effective at buying everybody time to keep making facts on the ground. It enrages me; everyone is falling for it. We are three years later than we could have been if we had cracked on and started creating some proper rules upfront.’

Image credit: University of Oxford

BCS Virtual Insights 2020

Find out more about BCS Virtual Insights