The European Commission’s (EC) proposed Artificial Intelligence (AI) regulation – a much-awaited piece of legislation – is out.

Copyright by Sébastien Louradour, Fellow, Artificial Intelligence and Machine Learning, World Economic Forum

While this text must still go through consultations within the EU before its adoption, the proposal already provides a good sense of how the EU envisions the development of AI in the years to come: through a risk-based approach to regulation.

Among the identified risks, remote biometric systems, which include Facial Recognition Technology (FRT), are a central concern of the draft proposal:

  • AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons are considered high-risk systems and would require an ex-ante evaluation of the technology provider to attest compliance before the system gains access to the EU market, as well as an ex-post evaluation of the provider (detailed below).
  • In addition, the use of “real-time” remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement is largely prohibited, with very limited exceptions related to public safety, such as the targeted search for missing persons or the prevention of imminent terrorist threats (detailed in Chapter 2, Article 5, pp. 43-44). Additional requirements for this use-case include an ex-ante evaluation to grant authorisation to law enforcement agencies: each individual use must be authorised by a “judicial authority or by an independent administrative authority of the Member State”, unless it is operated in a “duly justified situation of urgency”. Finally, national laws will determine whether Member States fully or partially authorise the use of FRT for this specific use-case.

Other use-cases, such as FRT for authentication, are not on the high-risk list and would therefore be subject to a lighter level of regulation.

Ex-ante and ex-post evaluation of technology providers

The ex-ante evaluation (conformity assessment of providers) would include:

  • A review of compliance with the requirements of Chapter 2;
  • An assessment of the quality management system, including its risk management procedures and the post-market monitoring system; and
  • An assessment of the technical documentation of the designated AI system.

Certifying the quality of the processes rather than the algorithm performance

While technology providers have to maintain the highest level of performance and accuracy in their systems, this necessary step is not the most critical one for preventing harm. The EC does not set any accuracy threshold to meet; instead, it requires a robust, documented risk-mitigation process designed to prevent harm. Deploying a quality management system is an important step, as it will require providers to design adequate internal processes and procedures for actively mitigating potential risks.

A focus on risk management and processes

While it will be up to technology providers to set up their own quality processes, third-party notified bodies will be responsible for attesting providers’ compliance with the new EU legislation.

To succeed, tech providers will need to build tailored approaches to design, implement, and run these processes. Providers will also have to work closely with the users of their systems to anticipate potential risks and put mitigation measures in place to prevent them.