6 January 2021

The government’s decision not to go ahead with GCSE, A-level and AS-level exams this summer is understandable considering the impact of the pandemic on students’ learning. But BCS, The Chartered Institute for IT is disappointed that algorithms, in general, are being blamed again for the exam results crisis of last year.

Dr Bill Mitchell OBE, Director of Policy at BCS, said: “Last year an ill-conceived algorithm was used to award exam grades to students, which assumed each school’s results had to be effectively identical to the previous year’s. That caused a huge loss of public trust in the very idea of an algorithm being used to make judgements of any kind.

“The real issue was poor communication and poor collaboration between government offices and departments. Because they were designed to operate at arm’s length rather than as closely integrated partners, the result was an algorithm designed to do something woefully different from what politicians wanted.

“It is damaging to our future as an advanced technological society to blame an algorithm when the true problem was a collective lack of technology governance, which can be fixed if we choose to do so.

“Please let’s all work together to sort out the design and development governance for algorithms in public service, not just shoot the algorithm. It only did what it was told to do; that is the point of algorithms.”

Last summer, in the wake of the exam results crisis, BCS carried out a survey which showed over half (53%) of UK adults said they had no faith in any organisation to use algorithms when making judgements about them, in issues ranging from education to welfare decisions. Just 7% of respondents trusted algorithms to be used by the education sector.

In a separate report, ‘The Exam Question: how do we make algorithms do the right thing?’ BCS recommended the government endorse and support the professionalising of data science, in line with a plan already being developed by a collaboration of the Royal Statistical Society, BCS, The Royal Society and others.

It would mean that algorithms whose creators did not meet a strict code of practice for ethics, professionalism and competence could not be used to decide issues such as exam grades, or to provide government with estimates about the outcomes of pandemics like COVID-19.

We need more professionalism in this area of AI, so that it can be used for the greater good, ethically and efficiently. We want to see a professionalised data science industry, independent impact assessments wherever algorithms are used to make high-stakes judgements about people’s lives, and a better understanding of AI and algorithms by the policymakers who give them sign-off.

Contact the Press Office