Tough new EU rules on the use of Artificial Intelligence (AI) will require organisations to meet unprecedented standards of ethics and transparency, says BCS.

The proposed legislation could also have far-reaching implications for companies around the world if they want to provide AI services to businesses inside the EU. Claire Penketh looks at what this could mean in practice.

This proposed legislation aims to establish the criteria by which AI could be used and implemented in the EU. Firms that don’t comply would face hefty fines.

For high-risk applications that could endanger people’s safety or legal status, such as using AI in emergency services, recruitment or asylum decisions, and credit scoring, the systems would have to undergo stringent checks before being rolled out. Once implemented, there would be further obligations, such as addressing any bias that arises.

The rules would also specify when the use of AI is unacceptable - for instance, its routine use in mass surveillance, including facial recognition and other real-time biometric identification in public spaces. Exceptions would apply where AI is used to prevent a terror attack, find missing children, or tackle other public security emergencies.

The legislation would also ban subliminal AI techniques that influence people to act, form an opinion or make a decision they would not otherwise have made.

Why are these new rules being proposed?

The EU acknowledges that AI is a major part of the digital transformation of the world and is crucial to economic growth. It says it wants a set of rules that builds trust in AI and manages its potential impact on individuals, society, and the economy.

‘On artificial intelligence, trust is a must, not a nice to have,’ Margrethe Vestager, the European Commission executive vice president who oversees digital policy for the 27-nation bloc, said in a statement. ‘With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.’

The measures are the latest attempt by the EU to set global standards as a watchdog of the technology industry.

How extensive is the scope of the proposed regulation?

  • Companies could face fines of 6% of revenue if they don’t comply with bans or data requirements.
  • There would be smaller fines for companies that don’t comply with other strict requirements.
  • The law covers developers and users of high-risk AI systems.
  • Providers of what the EU calls high-risk AI must subject it to a conformity assessment before deployment.
  • High standards would be required for keeping logs and datasets and for ensuring the traceability of results, with human oversight to minimise risk.
  • The criteria for ‘high-risk’ applications include intended purpose, the number of potentially affected people, and the irreversibility of harm. (The full list is given below.)
  • National market surveillance authorities would enforce the new rules.
  • A European board of regulators would be set up to ensure harmonised enforcement of the rules across Europe.

What does BCS say about the proposals?

Dr Bill Mitchell OBE, Director of Policy at BCS said: ‘The EU has realised that AI can be of huge benefit or huge harm to society and has decided to regulate on standards for the design, development, and adoption of AI systems to ensure we get the very best out of them.

‘There will be a huge amount of work to do to professionalise large sections of the economy ready for this sweeping legislation.

‘These ambitious plans to make AI work for the good of society will be impossible to deliver without a fully professionalised AI industry. Those with responsibility for adopting and managing AI will need to ensure their systems comply with these new regulations, as well as those designing and developing these systems.

‘The IT profession - and particularly those involved in AI - will in the future need to evidence they have behaved ethically, competently and transparently. In principle, this is something we should all welcome, and it will help restore public trust in AI systems that are used to make high stakes decisions about people.’


What are its implications outside the EU?

The proposals set Europe on a different path to the US and China, directly prohibiting the use of AI for indiscriminate surveillance and most social scoring. But, much like the introduction of the General Data Protection Regulation, companies around the world may be forced to overhaul their operations if they want to continue to sell to Europe’s consumers or businesses.

The rules are likely to take some years to become law, but BCS advises organisations to prepare now, especially in the area of staff development and training.

The following is the full list of AI systems the new regulations specify as high-risk - that is, those the EU deems to present a clear safety risk or to impinge on EU fundamental rights, such as the right to non-discrimination.