The EU’s AI Act is pioneering AI regulation with a focus on high-risk systems, setting a global benchmark for compliance and safety. Here is the most important information you should know about it.

Copyright: htworld.co.uk – “The EU’s Artificial Intelligence (AI) Act: The First Of Its Kind”

The EU has introduced new legislation on AI, the EU AI Act, which lays the foundation for the regulation and responsible development of AI across all industries within the EU.

The Act was published in the Official Journal of the EU on 12 July 2024 and entered into force on 1 August 2024.

While it is the first legislation of its kind to come into effect globally, Colorado is not far behind, having recently become the first US state to pass comprehensive legislation on the issue.

This article looks at what the EU AI Act says, how it categorises AI systems, what it prohibits and what it deems high risk.

While the Act is relevant to many industries, this article briefly considers some of the implications for Medtech specifically and touches on how the Act compares with Colorado’s equivalent.


What the EU AI Act says

How the Act categorises AI systems

The Act classifies AI according to its risk:

  • Unacceptable risk: prohibited outright (e.g. manipulative AI and social scoring systems);
  • High risk: the bulk of the Act addresses high-risk AI systems, which are regulated;
  • Limited risk: a smaller section of the Act addresses limited-risk AI systems, which are subject to lighter transparency requirements (e.g. developers/deployers must ensure that end-users are aware they are interacting with AI); and
  • Minimal risk: unregulated (covers applications such as AI-enabled video games and spam filters).

What systems are prohibited?

According to the Act (quoting https://artificialintelligenceact.eu/high-level-summary/), the following types of AI system are prohibited, namely those:

  • deploying subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making, causing significant harm.
  • exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behaviour, causing significant harm.
  • biometric categorisation systems inferring sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation), except labelling or filtering of lawfully acquired biometric datasets or when law enforcement categorises biometric data.
  • social scoring, i.e., evaluating or classifying individuals or groups based on social behaviour or personal traits, causing detrimental or unfavourable treatment of those people.[…]

Read more: www.htworld.co.uk