As an information systems specialist, Professor Choudrie FBCS is well versed in how information sources work together. However, it was a recent project that made her explore the possibilities and limitations of AI.
Professor Choudrie begins: ‘If you ask anyone what artificial intelligence is, people will talk about the fact that it's a combination of different technologies. You've got your neural networks in there, you've got machine learning in there, you've got deep learning in there, so in a way it's a bit like an information system - a combination of people, technology and processes.’
AI preserving and protecting the population
In her most recent study, Professor Choudrie joined forces with Ketan Kotecha and Rahee Walambe, colleagues at Symbiosis International University, India, to explore how AI could help in the fight against COVID-19 misinformation.
Choudrie continues: ‘My colleagues in India said, “You are a social scientist. We are the technical people. Why don't we see if we can merge things together?” At the time, the COVID-19 pandemic was striking, and so we started to talk about the true and false information that was coming out.’
It's no exaggeration to state that, in times as dangerous as the COVID-19 pandemic, bad information can cost lives. Knowing whom you can trust is very hard. The internet is alive with people producing ideas, opinions and facts. But which facts are truly science based? This, Choudrie says, spurred the World Health Organization (WHO) into action: it urged countries to take stronger action to stop the spread of harmful information.
‘The WHO started to talk about the infodemic and how it was not just the virus, but the misinformation that was going to start hurting people,’ she says. ‘So, my friends in Symbiosis and I started to share updates and links. People will not always be able to identify what is true or false, and they could be going down the false path more easily than the true one. So, I sent them news links from The Guardian or the BBC, and they told me, “Yes, we've looked into The New York Times and such newspapers too; we also looked into China Daily.”’
The problem with AI
Misinformation around COVID-19 prevention and cures is still an emerging area of research, so the three scientists took 143 posts from across the globe and began training computers to identify which were true and which were false. Of the sample, the team identified 81 posts as false, or touting misinformation. However, in training the computer, there were always going to be some false positives.
‘What we have realised is that artificial intelligence also cannot recognise what is true or false very easily. I've seen examples of chihuahuas and muffins. When you look really close up, it can get confusing. Even humans can't tell for sure if it's a muffin or a chihuahua. So, if a human brain finds it confusing, then what about the computer we're training?’
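The study's actual models aren't described here, but the basic idea of training a computer on labelled posts can be sketched with a minimal naive Bayes text classifier in pure Python. The tiny dataset and labels below are invented for illustration; they are not the team's data:

```python
import math
from collections import Counter

def train(posts):
    """posts: list of (text, label) pairs, where label is "true" or "false"."""
    counts = {"true": Counter(), "false": Counter()}
    totals = Counter()
    for text, label in posts:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Return the label with the higher naive Bayes log-probability."""
    vocab = set(counts["true"]) | set(counts["false"])
    scores = {}
    for label in ("true", "false"):
        score = math.log(totals[label] / sum(totals.values()))  # log prior
        n = sum(counts[label].values())
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Invented toy data for illustration only -- not the study's dataset
posts = [
    ("drinking bleach cures the virus", "false"),
    ("bleach miracle cure kills virus", "false"),
    ("wash your hands regularly says who", "true"),
    ("who advises regular hand washing", "true"),
]
counts, totals = train(posts)
print(classify("bleach cure", counts, totals))      # -> false
print(classify("wash your hands", counts, totals))  # -> true
```

The false positives the article mentions are exactly what such a model produces when it misfires: a post labelled false that was in fact true, which with a sample of only 143 posts is hard to avoid.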
Questions about AI’s wider place
AI is often associated with bias. To ensure minimal to no bias in the AI results, Choudrie explored human versus machine decision making in its very widest sense. How will it fit into society? Whom will it benefit?
The question of who decides the route AI or machine learning will take is still in its infancy, she says. By way of illustration, Choudrie points to the EU’s Shaping Europe’s Digital Future policy paper on artificial intelligence. It begins:
‘The European Commission puts forward a European approach to Artificial Intelligence and Robotics. It deals with technological, ethical, legal and socio-economic aspects to boost EU's research and industrial capacity and to put AI at the service of European citizens and economy.’
The critical notion, Professor Choudrie says, is to put AI ‘at the service’ of its people - but people, by nature, are all very different. This led Choudrie and her team to think closely about who will benefit most from AI. Will it be young, well-off people - early tech adopters?
‘We decided to interview older adults,’ she explains. ‘Why older adults? Older adults are people who are very cautious. They are not easily influenced by online social networks. They're living in the real world. They do come across newspapers, hard copy newspapers and online newspapers. But, they're not easily influenced. So, when I interviewed them, I discovered that actually, awareness is a very important issue for artificial intelligence.
‘A computer being trained by us can be influenced by what we are, and are not, consciously aware of. So, if we are aware of a story, we'll say “this is true” and yet it may be false. I'll give you an example: there was a news link from China Daily and it was a true story. But when I showed it to the older adults, all of them told me it was false. Why? Because of where it was from. The older people didn’t believe it could be true.’
Will older people accept the pace of change?
While older people were used to validate ‘true’ or ‘false’ throughout the experiments, it did lead to questions, not just about misinformation, but about how accepting seniors are of the technology itself. Many people accept tablets, online shopping and even WhatsApp calls to relatives on different continents. But are they ready to accept the pace of change?
Exploring the point, she says: ‘Society is very scared of AI and robotics. But just as we were very scared of computers when they first came, and now work with them every day, I believe the same will happen with AI.’
AI - making the laws fit the situation
Along with exploring and studying how accepting people might be of AI, Professor Choudrie also sees the law as another potential hurdle for AI to navigate. ‘How the legalities work could affect innovation. In Germany, for example, autonomous cars must still have a driver, but if there is an accident, the blame will lie not only with the “driver” but with the car manufacturer - if it’s a product fault. So, there are new challenges and changes taking place in law.’
The use of technology also needs to be considered carefully, taking into account not only legality but ethics, which will vary from state to state.
‘For instance, in Sharia law, the lawmakers need to assess where AI falls within five categories of obligatory, encouraged, unrestricted, discouraged, and forbidden. Some believe only God should have the power to create, others that AI is good because it provokes debate.
‘I know Dubai has been working a lot on drones and has an emerging robotics sector that’s really taking off, but I think they will have to look at all these new technologies in combination with their culture and religion. So, for instance there are very strict laws governing where the drones can fly.’
What does the future hold?
While we do give a lot of emphasis to fears that the data will be sexist or racist, that it won’t take into account geography or might bring out our inner snob, there are a lot of amazing innovations that we might miss if we focus too much on the negatives, Professor Choudrie believes.
‘There are going to be evolutions of it across the world and it's going to lead to so many other outcomes that I don't think even we've envisaged,’ she says. ‘Like with the online digital platforms, now we have Uber, Deliveroo etc., I don't think we ever envisaged that we would have these sorts of platforms. As we evolve the artificial intelligence arena, we're going to find new ways of using it. But humans and machines will have to work together.’
Will AI ever be truly inclusive?
Will data ever be truly representative of gender, ethnicity, religion? The list is as vast as the billions of people on the planet. Can the dataset ever truly serve society? Will datasets be deliberately homogenised for societies with less diversity? How will that influence the outcomes?
‘AI is going to have to include everyone. That’s the only way we can all have access and the bias will go down. There are certain people who will put their prejudices and biases into the data. We have only to look at the A-level situation here in the summer. Disadvantaged communities may not have had such good results because the data was being supplied by people with those prejudices.
‘Our biases still remain with us, unconsciously or consciously. That is not something to be fearful of but something we can use, understand and apply to get data out with the different perspectives.
‘As the years go by, we are becoming more diverse. We are no longer in one continent with only one set of people. All our policies need to be informed by all these different types of people and situations; otherwise it will be only certain groups of people driving the data, and that is where the gaps will occur.’
Will we accept robots?
Japan has been tackling the social demands of an ageing population for decades. As the workforce shrinks and the number of people able to work in care homes decreases, the Japanese have started to introduce robots to care for, stimulate and exercise the elderly. Will the approach succeed here in the UK?
‘It’s already happening,’ Professor Choudrie is matter of fact. ‘At the moment, Bedfordshire and if I'm not wrong, Middlesex, have got robots in care homes. They're testing them out to see whether older adults will accept them.
‘If they were completely like robots and very cold, people probably wouldn’t accept them. But apparently, they are very nice; they speak well and are soft to the touch, so people are accepting them much more.’
This idea of machines working with humans, doing the jobs we don’t want to do has been the stuff of science fiction for many years. And although we haven’t got to the stage of every house having a robot housekeeper, we are certainly in the realm of having machines to autonomously vacuum the carpets and cut the grass. Little by little, science fiction has become science fact. Machines are now used in mainstream retail to recommend books on Amazon or suggest what we might like to watch next on Netflix. But what about innovations that feel more ‘dangerous’ to society, such as driverless cars?
‘There is also the challenge of getting people to accept it, for instance in Singapore they've got driverless buses. How are they getting people to accept it? They're still trying it out in Nanyang and other universities but older adults actually do like them, because they go very slowly.’