The European Commission’s (EC) proposed Artificial Intelligence (AI) regulation – a much-awaited piece of legislation – is out.
Copyright by Sébastien Louradour, Fellow, Artificial Intelligence and Machine Learning, World Economic Forum
While the text must still go through consultations within the EU before adoption, the proposal already gives a good sense of how the EU intends to govern the development of AI in the years to come: through a risk-based approach to regulation.
Among the identified risks, remote biometric systems, which include Facial Recognition Technology (FRT), are a central concern of the drafted proposal.
Other use cases, such as FRT for authentication, are not on the list of high-risk applications and should therefore face a lighter level of regulation.

Ex-ante and ex-post evaluation of technology providers
The ex-ante evaluation (conformity assessment of providers) would include:
- A review of compliance with the requirements of Chapter 2;
- An assessment of the quality management system, including its risk-management procedures, and of the post-market monitoring system; and,
- An assessment of the technical documentation of the designated AI system.
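To make the shape of this ex-ante assessment concrete, the three checks above can be sketched as a simple checklist object. This is purely illustrative; the field names are hypothetical and do not come from the proposal itself.

```python
from dataclasses import dataclass

@dataclass
class ConformityAssessment:
    """Illustrative model of the ex-ante checks (field names are hypothetical)."""
    chapter2_requirements_met: bool = False   # compliance with Chapter 2 requirements
    quality_management_system_ok: bool = False  # includes risk-management procedures
    post_market_monitoring_ok: bool = False
    technical_documentation_ok: bool = False

    def passes(self) -> bool:
        # The assessment only passes if every check succeeds.
        return all([
            self.chapter2_requirements_met,
            self.quality_management_system_ok,
            self.post_market_monitoring_ok,
            self.technical_documentation_ok,
        ])

assessment = ConformityAssessment(
    chapter2_requirements_met=True,
    quality_management_system_ok=True,
    post_market_monitoring_ok=True,
    technical_documentation_ok=False,  # missing technical documentation
)
print(assessment.passes())  # → False
```

The point of the sketch is that the assessment is conjunctive: a provider who fails any one of the documented checks fails the assessment as a whole.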
Certifying the quality of the processes rather than the algorithm performance
While technology providers must maintain the highest level of performance and accuracy in their systems, that necessary step is not the most critical one for preventing harm. The EC does not specify any accuracy threshold to meet; instead, it requires a robust, documented risk-mitigation process designed to prevent harm. Deploying a quality-management system is an important step, as it requires providers to design adequate internal processes and procedures for actively mitigating potential risks.
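One way to picture what "a documented risk-mitigation process" might look like in practice is a risk register that a quality-management system could audit. This is a minimal, hypothetical sketch, not anything prescribed by the proposal; every risk, mitigation, and field name below is invented for illustration.

```python
# Hypothetical risk register: each entry documents a risk, its mitigation,
# and an accountable owner. None of these entries come from the EC text.
risks = [
    {"risk": "misidentification of an individual", "severity": "high",
     "mitigation": "human review before any decision", "owner": "operations"},
    {"risk": "performance gaps across demographic groups", "severity": "high",
     "mitigation": "periodic audit on representative data", "owner": "ML team"},
]

def undocumented(register):
    # A quality-management check might flag risks lacking a mitigation or owner,
    # since the emphasis is on documented process rather than a raw accuracy number.
    return [r["risk"] for r in register if not r.get("mitigation") or not r.get("owner")]

print(undocumented(risks))  # → []
```

The design choice mirrors the article's point: the check does not ask "is the model accurate enough?" but "is every identified risk paired with a documented mitigation and an owner?"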
A focus on risk management and processes
While it will be up to the technology providers to set up their own quality processes, third-party notified bodies will have the responsibility of attesting providers’ compliance with the new EU legislation.
To succeed, tech providers will need to build tailored approaches to design, implement, and run these processes. Providers will also have to work closely with the users of their systems to anticipate potential risks and propose mitigation measures to prevent them.