As the European Commission’s High-Level Expert Group concludes its draft guidelines on AI, Theo Knott, Programmes Policy Manager at BCS, explains why he’s in the 0.1% of the population who finds ‘regulatory environment’ and ‘draft guidelines’ exciting.

As an unrepentant policy obsessive, I find phrases like ‘regulatory environment’ and ‘draft guidelines’ exciting. For most people, these phrases promise something monumentally boring; one rare exception to that rule is technology. The European Commission’s High-Level Expert Group on Artificial Intelligence (or AI HLEG, as it’s better known) is one of the latest groups to put forward a model of how we should go about developing AI for the public good. The AI HLEG consists of 52 experts appointed by the Commission and, while their recommendations are not law, they are likely to form the core of future European legislation.

The problem with legislating evolving tech

Earlier in the year, the group put out its draft ethical guidelines on developing trustworthy AI. The fundamentals advocate that AI should be human-centric, ethical and developed for the common good. While there is ample technological detail within the document, the mantra throughout is that AI should be developed in a way that benefits individuals and society.

As ever, when attempting to regulate a burgeoning technology, it’s hard to be prescriptive about specifics while things are still in their infancy, so the framework offers this overtly values-based approach. The hope is that this will allow innovation to be designed ethically from the outset, so we can avoid the sort of desperate retro-fitted regulation that has plagued areas like personal data and social media.

Our response to the draft guidelines

The guidelines call for a re-evaluation of the way we think about designing technology. Consequently, the Policy team at BCS collaborated with the Royal Academy of Engineering to scrutinise the proposals and submit a response to the draft guidelines. Our main points are outlined below.

  • The ethos of the guidelines is, for the most part, admirable. Ensuring that AI takes a human-centric approach is a sensible top-level aim, and building trust in AI as a technology is an important part of achieving this. However, there needs to be further clarification of what is meant by ‘human-centric’ and clearer definitions of debatable terms.
  • The document is currently academic in nature and the challenge will be to translate these principles into usable guidance for practitioners and people working in industry.
  • The main omission in these guidelines is any discussion of the need for better public understanding of AI. Recent successful AI applications have delivered world-class performance in very narrow areas. There is a danger that people will overestimate the capabilities of such systems and trust them beyond their capacity. Additionally, unrealistic expectations about what AI systems can do will arise without an appropriate understanding of how they work.
  • As a result, the importance of education, both for practitioners and for users of AI, cannot be overstated. Many of the recommendations in the document are predicated on a level of AI understanding that does not currently exist. Some combination of basic teaching in schools and ethics components within relevant higher education qualifications would help to address this.

While we feel the AI HLEG guidelines are broadly correct in what they call for, there is a chasm between where we are now and where the recommendations want us to be; a chasm that can only be bridged through greater awareness and education, both for the people designing AI technology and for the people who will be using it.

What comes next?

The next stage is for the AI HLEG to revise the draft ethical guidelines in light of responses from BCS and others, before formal legislation is constructed. With advances in AI marching forward, time is of the essence, but getting things right is more important still. Now is a critical time, when we will make decisions that affect how a revolutionary technology impacts mankind for decades to come. In the case of the AI HLEG, ‘regulation’ is not a boring word.