
Key Issues

Risk-Based Approach

The EU AI Act introduces a proportionate, risk-based approach to AI regulation, imposing a graduated scheme of requirements and obligations depending on the level of risk posed to health, safety and fundamental rights. With its horizontal legislative philosophy, the Act focuses on the intended purposes of AI systems and applies across all sectors and industries. In line with this risk-based approach, the final text of the Act has shifted from the binary low-risk vs. high-risk distinction proposed in the Commission’s White Paper on AI to a framework that classifies risk into four categories:

  • ‘unacceptable risks’ that lead to prohibited practices;
  • ‘high risks’ that trigger a set of detailed, complex and stringent obligations;
  • ‘limited risks’ with associated transparency obligations; and
  • ‘minimal risks’, where stakeholders, whether established in the EU or in a third country, are encouraged to adopt voluntary codes of conduct.

Based on this risk classification scheme, certain AI practices are considered to entail unacceptable risks and are directly prohibited under Article 5. The exhaustive list of prohibited AI practices includes use cases such as social scoring and cognitive behavioral manipulation that may cause harm. Hefty penalties are prescribed for carrying out these banned AI activities.

The EU AI Act is mainly targeted at high-risk AI systems (“HRAIS”), which are not prohibited but are strictly regulated under the Act, with the relevant obligations largely imposed on HRAIS providers. The Act sets out two main routes to HRAIS classification: being embedded in certain sensitive products, such as machinery or medical devices, or being employed in one of the use cases exhaustively listed in the Act.

In this respect, providers of HRAIS must observe strict obligations spanning the entire lifecycle of these systems, from development to the post-market stage. Among other detailed and complex requirements, these obligations include conducting conformity assessments; ensuring AI transparency, robustness and accuracy; and establishing post-market monitoring systems. Since AI-related risks may also arise along the marketing chain of AI systems, other market operators, e.g. deployers and distributors of HRAIS, are each subject to certain obligations in carrying out their market activities. In this way, the EU AI Act intends to address comprehensively and effectively the AI-associated risks that merit regulatory supervision.

However, the Act does not regulate AI systems alone. To give full effect to its risk-based philosophy, a separate chapter of the final text is devoted to general-purpose AI (“GPAI”) models, following intensive legislative discussions. AI models are the building blocks of AI systems: they become operable AI systems through the integration of further components, so AI-associated risks may also be embedded in the models themselves. In particular, well-trained, versatile and highly capable GPAI models may be employed in a diverse array of AI applications and give rise to otherwise unregulated risks. To tackle these risks, the EU AI Act introduces stringent requirements for the marketing of GPAI models as well. Providers of GPAI models must comply with a separate set of obligations, distinct from those imposed on HRAIS providers.

Moreover, the Act designates a specific subset of GPAI models: GPAI models with systemic risks (“GPAISR”). These are GPAI models demonstrating particularly high-impact capabilities due to their technical strength or market reach. The EU AI Act first introduces benchmarks and technical tools to identify such high-impact models, and then imposes further measures and strict obligations on GPAISR providers in addition to the requirements applicable to all GPAI models.

Transparency is vital for identifying, assessing, managing and mitigating risks related to AI technologies. In this regard, the EU AI Act allocates transparency obligations tailored to HRAIS and GPAI models respectively. However, the Act also devises a separate, general transparency regime with a wider scope of application: the category of AI systems with limited risks. Notably, AI systems with limited risks are not a mutually exclusive group and may in fact include any AI system, whether HRAIS or minimal-risk. Thus, if an AI system is marketed in a way that meets the general transparency conditions and requirements of the Act, it will be subject to these separately established transparency rules. These cases include, for instance, a non-high-risk AI system designed to interact with human beings. In this scenario, the provider is obliged to develop the AI system in such a way that the people interacting with it are informed of their exposure to the AI system.

From the international perspective, there is broad global consensus in support of a risk-based approach to AI regulation. However, concerns have been raised that some applications could fall through the cracks of the EU AI Act’s risk classification. Of particular concern is that the criteria for establishing whether an AI system poses an unacceptable risk are unclear in the Act. For example, the prohibition of AI systems that manipulate people through subliminal techniques appears intuitive, but in practice it is unclear how harm is to be understood and which applications may actually be subject to prohibition. The thresholds for manipulation and harm may therefore have to be clarified through future practical guidelines and interpretive tools, possibly with support from the pre-existing European regulatory inventory around personal data and consumer protection.

A similar set of considerations applies to the scope and definition of HRAIS under the EU AI Act. The list and criteria adopted in the final published text have been subject to lively debate, especially given that the list – originally described by the Commission as potentially covering a small subset of future AI systems on the market, around 5 to 10% – has expanded considerably. Moreover, the EU AI Act does not clarify how the different parts of AI systems should be treated, especially when these parts are supplied by different providers/manufacturers or are not marketed independently. Further clarification may be needed in practice on whether each AI component must individually meet the legislative requirements and who is responsible for its compliance.

Importantly, the EU AI Act empowers the European Commission to amend the list of HRAIS given in Annex III. However, the Commission may only add use cases to, or remove them from, this list within the eight main high-risk areas pre-defined by the Act; new high-risk areas can only be introduced through a new legislative act of the European Parliament and the Council. Yet room for flexible adjustment is crucial where AI is developed and deployed rapidly across an increasing number of sectors and use cases, as unknown and unanticipated risks may quickly and inevitably arise. Against this backdrop, the actual resilience of the EU AI Act’s structure will be tested through its implementation over time.

On another note, most of the transparency measures envisioned for HRAIS – notably their registration in the EU-wide HRAIS database – apply to the developers of these systems, not to the actors deploying them. Stakeholders have argued that there should be greater transparency in relation to deployers. Indeed, a 2022 Mozilla report argues that ‘deployers must therefore be obligated to disclose AI systems they use for high-risk use cases and provide meaningful information on the exact purpose for which these high-risk AI systems are used as well as the context of deployment’. In practice, the Act will need to strike a balance in the distribution of responsibilities among different operators, as such a balance is indispensable for an effective and competitive AI market in the EU.

Perhaps the most significant gap is that the AI Act does not consider the risks associated with the interaction between multiple AI systems. For example, several AI systems with individually limited or minimal risk profiles could end up interacting and generating significant risks for individuals and society as a whole. These so-called interactive risks of AI are for now excluded from the scope of the Act and might instead be addressed by the proposed EU legislative initiative on AI Liability. As essentially product legislation for the marketing of AI systems, the EU AI Act is focused on a linear risk-based approach, with only an isolated discussion of the role played by the individuals who use AI systems.

Notwithstanding these possible legislative shortcomings, the EU AI Act heralds a new era for the regulation of AI technologies. With its unique, comprehensive and detailed set of rules, the Act is bound to have a profound impact on the European AI market and possibly beyond. Enforcement of the Act will not take effect immediately, and the extent to which its risk-based approach is effectively employed will only become apparent in time.
