
High-Risk AI Systems Under the EU AI Act

Published on Jul 10, 2024

The EU AI Act represents a pivotal regulatory framework designed to govern the development and deployment of artificial intelligence (AI) across Europe. Central to this legislation is the classification of AI systems according to their potential risks, with the aim of safeguarding public safety and fundamental rights. This article explores the significance of identifying AI systems as high-risk, detailing the criteria outlined in the Act's Article 6 and Annex III. By clarifying these classifications, the article aims to inform stakeholders about their obligations, compliance measures, and the implications for AI innovation within the EU.

Overview of the EU AI Act

The EU AI Act sets out comprehensive rules to ensure the safe and ethical use of artificial intelligence within the European Union. It defines AI systems, outlines prohibited practices, and establishes criteria for classifying AI systems based on their risk levels. These regulations are essential for promoting trust and transparency in AI technologies while addressing potential risks to individuals and society.

General Provisions of the Act

The EU AI Act establishes a comprehensive regulatory framework to govern the development and deployment of artificial intelligence (AI) technologies across the European Union. It sets out clear rules and standards to ensure AI's safe and ethical use, balancing innovation with the protection of fundamental rights and public safety.

Objectives and Scope

At its core, the EU AI Act seeks to achieve several key objectives:

  • Promotion of Trust: Enhancing public and stakeholder trust in AI technologies through transparent and accountable practices.
  • Protection of Rights: Safeguarding fundamental rights, including privacy and non-discrimination, in designing and deploying AI systems.
  • Facilitation of Innovation: Fostering an environment that supports innovation while mitigating risks associated with AI technologies.

The scope of the Act covers a wide range of AI applications, from consumer products to industrial systems, ensuring that all sectors adhere to uniform standards and regulatory oversight.

Key Definitions and Terminologies

Central to understanding the EU AI Act are key definitions and terminologies that provide clarity on its application:

  • AI Systems: Defined broadly as machine-based systems that operate with varying levels of autonomy and generate outputs such as predictions, recommendations, or decisions.
  • High-Risk AI: AI systems categorized based on their potential to cause harm or impact fundamental rights, subjecting them to stricter regulatory scrutiny.
  • Third-Party Conformity Assessment: A mandatory evaluation process to ensure compliance with safety and ethical standards before AI systems are introduced to the market.

Criteria for High-Risk Classification

Article 6 of the EU AI Act sets out criteria for classifying AI systems as high-risk, based on their role as safety components or products and the requirement for third-party conformity assessments, ensuring stringent oversight to protect public safety and fundamental rights.

Explanation of Article 6: Classification Rules for High-Risk AI Systems

Article 6 of the EU AI Act outlines the requirements for classifying AI systems as high-risk, ensuring stringent regulatory oversight to protect public safety and fundamental rights. The classification hinges on two primary conditions:

Condition 1: AI as a Safety Component or Product

  • AI systems intended as integral safety components of products or standalone products covered by Union harmonization legislation listed in Annex I are classified as high-risk.

Condition 2: Requirement for Third-Party Conformity Assessment

  • Additionally, AI systems must undergo a third-party conformity assessment as mandated by Union harmonization legislation in Annex I to ensure they meet stringent safety and ethical standards before market introduction.

These conditions are pivotal in determining whether an AI system qualifies as high-risk under the EU AI Act, setting clear parameters for compliance and regulatory obligations.
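The two conditions above combine conjunctively: both must hold for an AI system to be high-risk under Article 6(1). As a purely illustrative sketch (the boolean flags and function names below are hypothetical simplifications, not legal logic), the test can be expressed as:

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    """Simplified, hypothetical description of an AI system."""
    # Condition 1: safety component of (or itself) a product covered
    # by Union harmonisation legislation listed in Annex I
    is_safety_component_or_annex_i_product: bool
    # Condition 2: the product must undergo a third-party
    # conformity assessment before being placed on the market
    requires_third_party_assessment: bool


def is_high_risk_under_article_6_1(system: AISystem) -> bool:
    """High-risk under Article 6(1) only when BOTH conditions hold."""
    return (system.is_safety_component_or_annex_i_product
            and system.requires_third_party_assessment)


# Example: an AI braking controller embedded in a regulated vehicle
brake_controller = AISystem(True, True)
office_chatbot = AISystem(False, False)
```

Note that this captures only the Article 6(1) route; Annex III (discussed below) provides a separate, use-case-based route to high-risk classification.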

Additional Criteria under Annex III

In addition to the primary conditions outlined in Article 6, Annex III of the EU AI Act provides supplementary criteria for identifying high-risk AI systems:

  • Specific Use-Cases: AI systems engaged in activities listed in Annex III that potentially pose significant risks to health, safety, or fundamental rights are classified as high-risk.
  • Risk Mitigation Factors: AI systems in Annex III may be exempt from high-risk classification if they do not pose significant risks, considering factors such as the nature of tasks performed, autonomy levels, and impact on decision-making.

These additional criteria under Annex III further refine the classification process, ensuring comprehensive coverage of AI applications while maintaining regulatory clarity and adherence to ethical standards.

Specific High-Risk AI Use Cases

Annex III of the EU AI Act lists high-risk AI applications, including those in healthcare, transportation, and critical infrastructure, due to their significant impact on public safety and fundamental rights.

Examples of AI Systems Listed in Annex III

Annex III of the EU AI Act enumerates various AI applications deemed high-risk due to their potential impact on public safety and fundamental rights. These include:

  • Healthcare Systems: AI used in medical diagnosis, treatment recommendations, and patient care monitoring.
  • Transportation Systems: Autonomous vehicles and AI-driven traffic management systems.
  • Critical Infrastructure: AI systems controlling energy distribution, water supply, and telecommunications networks.

Explanation of the Types of AI Systems Typically Considered High-Risk

High-risk AI systems span diverse sectors where their operations have significant implications. These include healthcare, transportation, and critical infrastructure, where AI's autonomy and decision-making capabilities are crucial yet potentially hazardous if not properly regulated. Such classifications ensure robust oversight and adherence to stringent safety and ethical standards.

Derogation from High-Risk Classification

AI systems in Annex III may be exempt from high-risk classification if they perform limited tasks, enhance human activities, detect patterns without influencing decisions, or assist in preparatory assessments.

Situations Where AI Systems Referred to in Annex III May Not Be Considered High-Risk (Article 6, Paragraph 3)

Article 6, Paragraph 3 of the EU AI Act provides conditions under which AI systems listed in Annex III may be exempted from high-risk classification. These include:

  • Narrow Procedural Tasks: AI systems performing limited, procedural tasks with minimal impact on outcomes.
  • Improving Results of Previously Completed Human Activities: AI enhancing existing human-performed tasks without altering final decisions.
  • Detecting Decision-Making Patterns Without Influencing Outcomes: AI analyzing patterns for informational purposes without direct influence on decisions.
  • Preparatory Tasks for Assessments: AI aiding in preparatory tasks for assessments or evaluations without direct decision-making authority.
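The derogation conditions above are disjunctive: satisfying any one of them may exempt an Annex III system, subject to the Act's caveat that systems performing profiling of natural persons remain high-risk regardless. The following sketch (flag names are hypothetical simplifications of the legal text) illustrates that structure:

```python
def annex_iii_derogation_applies(performs_profiling: bool,
                                 narrow_procedural_task: bool,
                                 improves_prior_human_activity: bool,
                                 detects_patterns_without_influence: bool,
                                 preparatory_task_only: bool) -> bool:
    """Illustrative sketch of the Article 6(3) derogation.

    An Annex III system may be exempt from high-risk classification
    if ANY of the four listed conditions applies -- except that a
    system performing profiling of natural persons is always
    considered high-risk.
    """
    if performs_profiling:
        return False  # profiling forecloses the derogation
    return any([narrow_procedural_task,
                improves_prior_human_activity,
                detects_patterns_without_influence,
                preparatory_task_only])
```

For instance, a document-sorting tool performing a narrow procedural task could qualify for the exemption, while the same tool would not if it also profiled individuals.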

Obligations for High-Risk AI Providers

Providers of high-risk AI systems must perform comprehensive risk assessments, document system details, and register with authorities, ensuring regulatory compliance, transparency, and mitigation of adverse impacts.

Documentation and Assessment Requirements

Providers of high-risk AI systems must conduct thorough risk assessments and document system functionalities, data processing methods, and potential risks to ensure compliance with regulatory standards and mitigate adverse impacts.

Registration Obligations as Outlined in Article 49(2)

Providers are required to register their high-risk AI systems with competent authorities before deployment, submitting detailed documentation and assessment reports to facilitate transparency and regulatory oversight.

Compliance with Guidelines and Provision of Necessary Documentation to Authorities

Providers must comply with EU AI Act guidelines, ensuring ethical and legal conformity of their AI systems, and promptly furnish required documentation to regulatory authorities for transparency and accountability.  

Amendments and Updates to High-Risk Classification

The European Commission updates high-risk AI classifications in Annex III, ensuring accurate regulatory oversight through criteria-based assessments and transparent delegated acts to respond to technological advancements and emerging risks.

Role of the European Commission in Updating Annex III (Article 7)

The European Commission updates Annex III by defining criteria for adding or modifying high-risk AI classifications based on potential risks to health, safety, or fundamental rights, and outlines procedures for removing AI systems when risks diminish.

Criteria for Adding or Modifying High-Risk Use Cases

The Commission evaluates AI system purpose, data processing capabilities, and potential impacts to classify systems accurately, ensuring effective regulatory oversight and alignment with technological advancements.

Criteria for Removing AI Systems from the High-Risk List

AI systems are removed from the high-risk list if they no longer pose significant risks, based on comprehensive impact assessments and regulatory criteria to balance safety with innovation.

Process for Adopting Delegated Acts

Delegated acts allow the Commission to update regulatory frameworks swiftly through transparent procedures, including public consultations and expert input, to respond to technological developments and emerging risks.

Transparency and Monitoring

To ensure AI systems' safety and reliability, providers must disclose functionalities, decision-making processes, and risks, while post-market monitoring facilitates timely issue detection and mitigation.

Transparency Obligations for Providers and Deployers (Chapter IV)

Providers must disclose AI system functionalities, decision-making processes, and risks to facilitate user trust and regulatory oversight, ensuring informed decision-making and compliance.

Post-Market Monitoring and Information Sharing (Chapter IX)

Post-market monitoring and information sharing enable timely detection and mitigation of high-risk AI system issues, fostering collaboration among stakeholders to enhance system safety and reliability.

Conclusion

In conclusion, the classification of AI systems as high-risk under the EU AI Act plays a crucial role in safeguarding public health, safety, and fundamental rights while promoting innovation. By defining clear criteria and obligations for AI developers and providers, the Act ensures that high-risk AI systems undergo rigorous assessment and comply with stringent regulatory standards. This classification framework not only enhances transparency and accountability but also fosters trust among users and stakeholders in the deployment of AI technologies.

Holistic AI Governance platform is an all-in-one solution, providing you with the tools you need to align your AI usage to the complex obligations under the EU AI Act with ease.

Schedule a call to explore how we can support your AI Governance journey and ensure responsible AI deployment.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
