Tough new EU rules on the use of Artificial Intelligence (AI) will require organisations to meet unprecedented standards of ethics and transparency, says BCS.

The proposed legislation could also have far-reaching implications for companies around the world if they want to provide AI services to businesses inside the EU. Claire Penketh looks at what this could mean in practice.

The legislation aims to establish the criteria under which AI may be developed and deployed in the EU. Firms that fail to comply would face hefty fines.

High-risk applications that could endanger people’s safety or legal status, such as AI used in emergency services, recruitment, asylum decisions and credit scoring, would have to undergo stringent checks before being rolled out. Once deployed, the systems would carry further obligations, such as addressing any bias that arises.

The rules would also define where the use of AI is unacceptable: for instance, its routine use for mass surveillance, including facial recognition and other real-time biometric identification in public spaces. Exceptions would apply where AI is used to prevent a terror attack, find missing children or tackle other public security emergencies.

They would also ban subliminal techniques that use AI to influence people to act, form an opinion or take a decision they would not otherwise have taken.

Why are these new rules being proposed?

The EU acknowledges that AI is a major part of the digital transformation of the world and is crucial to economic growth. It says it wants a set of rules that builds trust in AI and manages its potential impact on individuals, society, and the economy.

‘On artificial intelligence, trust is a must, not a nice to have,’ Margrethe Vestager, the European Commission executive vice president who oversees digital policy for the 27-nation bloc, said in a statement. ‘With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.’

The measures are the latest attempt by the EU to set global standards as a watchdog of the technology industry.

How extensive is the scope of the proposed regulation?

  • Companies could face fines of up to 6% of worldwide annual turnover if they don’t comply with bans or data requirements.
  • There would be smaller fines for companies that don’t comply with other strict requirements.
  • The law covers developers and users of high-risk AI systems.
  • Providers of what the EU calls high-risk AI must subject it to a conformity assessment before deployment.
  • High standards would apply to the keeping of logs and datasets and to ensuring the traceability of results, with human oversight to minimise risk.
  • The criteria for ‘high-risk’ applications include intended purpose, the number of potentially affected people, and the irreversibility of harm. (The full list is given below.)
  • National market surveillance authorities would enforce the new rules.
  • A European board of regulators would be set up to ensure harmonised enforcement of the rules across Europe.

What does BCS say about the proposals?

Dr Bill Mitchell OBE, Director of Policy at BCS, said: ‘The EU has realised that AI can be of huge benefit or huge harm to society and has decided to regulate on standards for the design, development, and adoption of AI systems to ensure we get the very best out of them.

‘There will be a huge amount of work to do to professionalise large sections of the economy ready for this sweeping legislation.

‘These ambitious plans to make AI work for the good of society will be impossible to deliver without a fully professionalised AI industry. Those with responsibility for adopting and managing AI will need to ensure their systems comply with these new regulations, as well as those designing and developing these systems.

‘The IT profession - and particularly those involved in AI - will in the future need to evidence they have behaved ethically, competently and transparently. In principle, this is something we should all welcome, and it will help restore public trust in AI systems that are used to make high stakes decisions about people.’

What are its implications outside the EU?

The proposals set Europe on a different path to the US and China, directly prohibiting the use of AI for indiscriminate surveillance and most social scoring. But, much like the introduction of the General Data Protection Regulation, companies around the world may be forced to overhaul their operations if they want to continue to sell to Europe’s consumers or businesses.

The rules are likely to take some years to become law, but BCS advises that organisations need to prepare now, especially in the area of staff development and training.

The following is the full list of AI systems the proposed regulations specify as high-risk: that is, systems the EU deems to present a clear risk to safety or to impinge on EU fundamental rights, such as the right to non-discrimination.

Biometric identification and categorisation of natural persons:

  • AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons.

Management and operation of critical infrastructure:

  • AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity.

Education and vocational training:

  • AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions.
  • AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions.

Employment, workers management and access to self-employment:

  • AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests.
  • AI intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships.

Access to and enjoyment of essential private services and public services and benefits:

  • AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services.
  • AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use.
  • AI systems intended to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid.

Law enforcement:

  • AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences.
  • AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person.
  • AI systems intended to be used by law enforcement authorities to detect deep fakes.
  • AI systems intended to be used by law enforcement authorities for evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences.
  • AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups.
  • AI systems intended to be used by law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation, or prosecution of criminal offences.
  • AI systems intended to be used for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data.

Migration, asylum, and border control management:

  • AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person.
  • AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State.
  • AI systems intended to be used by competent public authorities for the verification of the authenticity of travel documents and supporting documentation of natural persons and detect non-authentic documents by checking their security features.
  • AI systems intended to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.

Administration of justice and democratic processes:

  • AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.