Brian Runciman, Head of Content and Insight at BCS, guided the day’s events, which included a host of guest speakers from industry and academia, as well as a number of debates about inclusion, age, skills and AI. The opinions of the audience were sought throughout the day using the Sli.do app. This is a brief overview of how the day panned out.

Starting the conversation

Taking the stage first was Adam Thilthorpe, Director for Professionalism at BCS. His speech was an assessment of change within the IT industry: how companies that were never orientated towards IT are now dealing with digital transformation daily. For organisations looking for a baseline on which to build a new strategy, there’s BCS. The charity offers hope, inclusion and a forum in which companies and individuals can gather to benefit from open debate and conversation.

Next, Adam took a look at ethics, trust and tech. He questioned whether people working in Silicon Valley were developing the right tech: yes, apps could help people order a pizza or a taxi without direct communication with another person, but was that really making IT good for society? And when would the normal rules of society apply to platforms such as Airbnb and Facebook, which happily brush off responsibility by claiming to be ‘just another app’?

He challenged whether there was a need to reimagine and redefine ‘what good looks like’ in tech, and left us pondering whether IT and digital experts would one day need to be sociologists, too...

Diversity is also a business case

Taking to the stage next was Vinous Ali, Head of Policy at techUK, the UK’s trade association for the technology industry. Representing more than 900 member companies and some 700,000 people in tech, the organisation focuses on areas such as defence, national security, national infrastructure, intelligence and organised crime.

techUK is devoted to making tech good for the UK. That means ensuring the UK remains an attractive destination for investment, driving productivity, offering opportunity to more people and solving some of the largest problems in society. For technology to be diverse, it needs to be created by a diverse workforce: tech needs to be representative. Vinous explained that 15% of the IT sector’s workforce is from Black, Asian and minority ethnic (BAME) backgrounds – not entirely representative of the UK’s societal makeup, but better than most sectors. However, BAME people generally aren’t in leadership roles.

There is also a business case for being diverse. UK companies are saying: ‘if I had more people I could continue to grow faster’. The tech sector is continuing to grow, but is hindered by not having people with the right skills. Equally, the systems that will determine whether more of us get a university place, or a mortgage, for example, are increasingly driven by AI – which has often been shown to be biased against women and people of colour.

Vinous urged a change in the way the UK recruits for tech positions: ‘companies looking to hire are typically fishing in small pools of talent - university graduates, for example - which are arguably in and of themselves not diverse pools.’ Talent from ethnically diverse backgrounds needs to be drawn upon and, to promote diversity, recruiters need to insist on a diverse range of candidates. That said, even recruiting diverse candidates doesn’t mean companies will retain their new employees. People need to feel they belong, and it’s only by making everyone feel included, by mentoring and encouraging our colleagues, that companies can build diverse teams. In essence, we’re not in a bad position, but we could be in a better one: we have strength in diversity.

Small changes can change the world

Motivational speaker, vlogger and advocate of social inclusion, Martyn Sibley entitled his talk ‘How to change the world’. Martyn, who has spinal muscular atrophy, has been in a wheelchair since the age of three. For him, growing up in a small, rural village in Cambridgeshire, inclusion wasn’t a choice - it was the default. His talk reflected not only on his inclusive education and largely inclusive upbringing, but also on his determination to overcome physical barriers and break down social prejudice in terms of environment, attitude and policy.

Technology has made his life easier and offered him a degree of independence that previous generations simply wouldn’t have had access to: his specially adapted car, his care, the equipment he uses in his house – even public transport. His argument is that making small differences, both at home and at work, will change the world. However, he does believe that inclusion is a two-way street: there needs to be acceptance and a willingness to grasp all life has to offer when barriers are removed.

Martyn doesn’t let disability choose the life he has. He’s adventurous, has a positive mindset and uses his unstoppable lifeforce as a superpower. He travels – saying Barcelona is one of the most accessible places in the world for disabled people – and he writes a blog that attracts around 50,000 readers every month. In short, he asked everyone in the room to use their ‘inclusion superpowers’, because it’s only when we’re aware of social problems and obstacles that we can all work to lessen their impact.

Promoting inclusion

After losing her eyesight in her twenties, Léonie Watson became an accessibility engineer. She’s now director of TetraLogical, a tech company that works with existing and emerging companies to create great code that understands and addresses the needs of everyone.

Her talk, ‘I, Human’, was about the use of tech to promote inclusion. In a world where AI is becoming mainstream, there needs to be a change in perception. Humans are not always rational, but we are remarkably capable: we can hear, see, think and speak. Around 30% of our brainpower goes on processing visual signals, speaking takes up 50%, and hearing 3%.

As humans, we have the ability to understand each other. We can understand gestures and translate different languages. When humans have problems seeing, hearing, or in cognitive processing, there are ways we can overcome them: we can listen to speech and translate it into text; we can create synthetic speech for those who can no longer talk. As tech changes lives, we have to re-evaluate what we think we already know.

Taking a journey through time and literature, Léonie talked about authors who dreamed of thinking machines: from Apollonius’ Argonautica (3rd century BC), featuring Talos the automaton, to Samuel Butler’s 1872 novel Erewhon (which George Orwell praised, saying it needed ‘imagination of a very high order to see that machinery could be dangerous as well as useful’ – how times change!), to Isaac Asimov’s 1950 story collection, I, Robot.

Leaving science fiction behind and looking at science fact, Léonie explained the evolution of computers that today can talk and even identify the differences between objects with machine learning, making the world more accessible:

  • 1961: An IBM 7094 at Bell Labs sings ‘Daisy Bell’ – the first time a computer sings.
  • 1993: Apple develops voice recognition.
  • 2013: Kinect sign language translation is launched.
  • 2014: Amazon Echo becomes part of our homes.
  • 2016: Face recognition software comes into widespread use.
  • 2017: Microsoft’s Seeing AI gives visually impaired people spoken descriptions to help them navigate the world around them. It describes and recognises people, and reads text, signs, door numbers, food labels and handwriting – it even facilitates payments.

Léonie also explored what happens when machines are taught human behaviours that are not so desirable. In 2013, IBM’s Watson learned to swear - and took to it with great gusto! In 2016, Microsoft’s chatbot, Tay, was taken down just 16 hours after launching - for posting offensive and inflammatory tweets from its own Twitter account. Learning through interaction made Tay replicate the language and ideas gleaned from unpleasant encounters with internet trolls.

What we can learn from Watson and Tay is that the data fed to machines by humans is incredibly important. If computers are taught the wrong things, they will learn to behave badly.

I, Human and I, Robot are one and the same: AI amplifies human ingenuity and when used in the right way, extends our capabilities, breaking down barriers for users. Now, it’s about augmenting the good and using the right tools to do so.

The computing boys’ club

Speaking on video, BCS Past President Dame Wendy Hall shared her thoughts on equality in IT and her role as the first Skills Champion for AI in the UK. Wendy is Regius Professor of Computer Science, Pro Vice-Chancellor (International Engagement) and an Executive Director of the Web Science Institute at the University of Southampton.

She wrote her first paper about the lack of women in IT back in 1987. In the late 60s and early 70s, there were lots of women running computer departments but, she explained, ‘as the PC age dawned, the women disappeared.’ Computing had become a boys’ subject with a male culture that had sent the figures for women in the industry plummeting. ‘And it’s not getting better,’ she said, ‘if anything, it’s getting worse.’

Wendy said that when she travels to the Middle East, Singapore and elsewhere, 50% of the students in IT are women - so it’s not genetic, it’s cultural. The lack of women in IT in the UK has to change, and we all need to facilitate that change, because if we don’t, the risk is huge issues of bias in the data and algorithms behind AI. If it is only men who program machines, then it is inevitable that platforms will have gender bias.

Can we trust AI?

Luciano Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford, where he directs the Digital Ethics Lab of the Oxford Internet Institute and is Professorial Fellow of Exeter College. He is also a Turing Fellow and Chair of the Data Ethics Group at the Alan Turing Institute.

Addressing the delegates in the first afternoon session, Luciano compared how AI can be used (opportunities) and how it can be misused (risks). He explained that ours is a community in which differences can be made; that there are issues, but that they can be tackled in collaboration. We can all make a difference.

AI is not one subject, he clarified, but a huge range of ideas and applications, each of which contributes to the processing, storing or generation of vast swathes of data. How we deal with the amount of data we have generated in the last decade alone raises significant ethical as well as logistical questions.

Aside from data, Luciano explained: ‘AI is going to affect who [we] feel [we are]. It will challenge how [we] feel as [individuals]. It will reformulate personal identity’ and is even ‘eroding human self-determination.’ He gave the example of targeted advertising, search results and news stories unconsciously influencing what we think are personal choices or decisions, but which have really been tailored for us by algorithms. So, whilst AI has the ability to promote societal cohesion and improve skills, it can also remove responsibility and reduce human control. The more we delegate the decision-making process, the more we risk not knowing what’s going on.

Can we really trust AI? There is currently a draft European directive on what is ‘right’ or ‘wrong’ in AI, but academics have been discussing AI ethics since the 1960s. We make mistakes; we try to do the right thing, but sometimes we are just limping through, and so the directive can’t just be a list of ingredients to retrofit onto what is already being done in AI – that doesn’t work in the field of ethics. We need ethical and legal processes in place now.

We are all part of the project; we’re all trying to solve a problem – applied ethics, autonomy, justice and fairness. We’re still doing this in a trial-and-error way, but what we really need is a huge transformation.

Balancing data and privacy

Ivana Bartoletti is the Head of Privacy and Data Protection at Gemserv, and a founding member of the Women Leading in AI network. Her background is in privacy law; now that more companies are using AI and analytics, privacy is back on the agenda. She began by explaining the complexity of the concept of privacy.

Notions of data and privacy change so much across cultures and over time, so how do we pull them together? When Ivana went to law school, she was taught that privacy was about collecting the least amount of data possible - now, it’s about collecting everything. We like our Fitbits, Google Maps and online shopping… all of which ask us to yield data about ourselves. But what if a person refuses to submit to facial recognition on the street; should they be fined? In this age of big data, we need to redefine privacy.

Why does it matter?

Algorithms need a lot of data to recognise patterns, but what about the problems of reusing personal data? Ivana explained: ‘If I undergo a health screening, that data could be used not just about me, but also about my family [who have not consented to the health check]. Decisions by one person might affect many.’

In the justice system, algorithms based on patterns could be used to calculate what your rate of reoffending might be and therefore how long you should be in prison. What about algorithms that can assess whether you’re entering a depressive stage of your life based on observed behaviours or choices? Characteristics that suggest your sexual orientation? Looking at your online behaviour, organisations can micro-target you. If shopping companies target you, that might be considered okay. But, what if you’re vulnerable and gambling companies start to target you? Is that okay?

What if your viewing preferences change the adverts you receive? The political information you see? Where privacy is concerned, there is a need for change within the advertising ecosystem, as targeted, personalised information could impact democracy. If we all have access to the same things - great. However, if we’re each always experiencing something different, then where’s the common baseline from which to talk and debate?

To retain our human autonomy, we must have the right not to be monitored or tracked all the time. Ivana advocates privacy by design and differential privacy: a new movement that, quite literally, adds noise to the data.

For example, the shift could be as small but significant as internet search suggestions for where to have coffee being based on a general nearby location (such as a building or a school) rather than precisely where you are at that exact moment.
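
To make those two ideas concrete, here is a minimal sketch of Laplace noise for differential privacy and of location coarsening. It is our own illustration rather than anything shown in the talk; the function names are hypothetical, and NumPy’s Laplace sampler stands in for a production-grade mechanism.

```python
import numpy as np

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count under epsilon-differential privacy (illustrative sketch).

    One person joining or leaving the data changes the count by at most
    `sensitivity`, so Laplace noise with scale sensitivity/epsilon is
    enough to mask any individual's presence.
    """
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

def coarsen_location(lat: float, lon: float, grid: float = 0.01) -> tuple:
    """Snap coordinates to a coarse grid (roughly 1 km), so a 'coffee nearby'
    suggestion uses the general area rather than an exact position."""
    return (round(lat / grid) * grid, round(lon / grid) * grid)

# How many patients attended a clinic this week - the noisy answer protects individuals.
print(private_count(128, epsilon=0.5))       # e.g. 126.4
print(coarsen_location(51.5236, -0.0843))    # roughly (51.52, -0.08)
```

The smaller epsilon is, the more noise is added and the stronger the privacy guarantee - a concrete form of the balance between data and privacy that Ivana described.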

The fourth industrial revolution

Next, from the newly formed Office for AI, Deputy Director Sana Khareghani addressed the audience: ‘Why is there so much conversation these days about AI? Why AI? Why now?’ The answer is because there has been such a vast increase in data and computer capacity; we are in the midst of the fourth industrial revolution. The NHS now invests around £79m in AI technologies to diagnose disease. AI can now identify and take down around 94% of jihadist videos within minutes, helping to tackle extremism.

Many nations are investing heavily in AI to become the forerunners of a technology that’s set to change the world: China and the UK are responsible for 82% of AI, and the majority of companies with double-digit growth intend to implement more AI. While there are plenty of questions about ROI and whether AI is worth it, London is notably the second largest AI start-up region in the world.

The AI Council will have an expert committee of individuals who will amplify the message about the power of AI. The Centre for Data Ethics and Innovation will be there to balance the uses of AI and to answer the difficult questions. Sana explained that the Office for AI has worked with the Open Data Society and put together a council which will be meeting soon. ‘We have been working on early diagnostics and trying to determine how to use AI to diagnose disease. We need to ask of AI, how can it be used to provide better public services?’

This country has a thriving ecosystem as the birthplace of AI, but we need to improve our skills. The competition for AI know-how is intense, with demand far outstripping supply. Up to the 1960s, 40% of computer scientists were women - a fact that reiterates both Dame Wendy’s and Vinous Ali’s earlier points: we need to fill the skills gaps by attracting a range of people to IT. We need the people who are making AI to be representative of the society in which we live – that’s the only way we will make AI representative.

Thinking machines

A distinguished engineer at IBM, James Luke says: ‘I’ve spent my whole life dreaming about thinking machines.’

That fascination has taken him from a childhood fancy to a career that – for the last 25 years – has centred on building real AI applications for real problems. As data has become more available, companies are now ready to embrace the next step in AI. The challenge is to think about how to engineer around the algorithms.

James’ career began in 1994, when he wrote to an F1 company offering to fix their AI problems. What he discovered, however, was that making AI work for real is HARD. It is incredibly important to identify the right problems to focus on and to ensure that the relevant data exists. When you go into a client situation where the data is already labelled, you can use a machine learning system, generate a set of results to prove ROI, and deploy it.
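
As a rough sketch of that happy path (our own illustration, not IBM’s code), one of scikit-learn’s bundled datasets can stand in for a client’s already-labelled data:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A bundled dataset stands in for a client's already-labelled data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Train a standard classifier on the labelled examples.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The held-out score is the 'set of results' used to prove ROI before deploying.
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {accuracy:.1%}")
```

The hard part, as James stressed, is everything before a snippet like this: picking the right problem and making sure relevant, labelled data exists at all.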

Surprisingly, the computing engineer actually advocates using a human where possible to solve problems: ‘If there is a problem humans can do, but can’t scale up, then bring in a computer.’ It is often unrealistic to expect a digital solution for a problem we haven’t already defined. Identifying disease is a good example of where both human expertise and AI power can be used effectively. The human contribution is important, as people can ensure the AI has the right datasets; they can also cleanse the data the machine is learning from to ensure more accurate identification.

To help his teams understand the concept of problems with clearly defined parameters and the necessity of quality data, James tells them to make up a world: create a set of rules, then take that data and apply the AI. They don’t often get it right straight away, but they soon learn.
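
A minimal sketch of what that exercise might look like (our illustration; the rule and names are invented): define a world with a known rule, generate data from it, and check whether a model recovers the rule from the data alone.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)

# The made-up world: two features, and a hidden rule "label = 1 iff x0 + x1 > 1".
X = rng.random((1000, 2))
y = (X[:, 0] + X[:, 1] > 1).astype(int)

# Train on part of the world, test on the rest: does the model recover the rule?
model = DecisionTreeClassifier(max_depth=4).fit(X[:800], y[:800])
print("accuracy on unseen world data:", model.score(X[800:], y[800:]))
# A poor score here points to the data or the framing, not the world itself -
# which is exactly the lesson the exercise is designed to teach.
```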

If we want to be successful in AI, we’ve got to stop thinking only about algorithms and start building end-to-end systems.