The EU AI Act is a groundbreaking regulation designed to set harmonized rules for the development, market placement, and use of artificial intelligence across the European Union. Its primary goal is to ensure that AI systems are safe, transparent, and respect fundamental rights. Understanding prohibited AI practices is crucial for businesses and developers to comply with the Act and avoid significant penalties. This article will outline the key prohibited practices under the EU AI Act, helping stakeholders navigate this complex regulatory landscape effectively.
The development of the EU AI Act began with its initial proposal by the European Commission on 21 April 2021. The Act has since undergone extensive consultation and refinement. Key milestones include the Council of the EU's adoption of its general approach in December 2022 and the European Parliament's adoption of its negotiating position in June 2023. These milestones mark the critical amendment and consensus-building phases that shaped the final text of the Act.
On 13 March 2024, the European Parliament approved the AI Act in a plenary vote. The Council gave its final endorsement in May 2024, and the Act entered into force on 1 August 2024.
The EU AI Act is driven by three core objectives:

- Safety: ensuring that AI systems placed on the EU market do not endanger the health, safety, or fundamental rights of individuals.
- Ethics: embedding respect for fundamental rights and Union values into how AI systems are developed and deployed.
- Transparency: ensuring that people know when they are interacting with an AI system and that providers can be held accountable for how their systems operate.
These objectives collectively aim to foster a safe, ethical, and transparent AI environment within the EU.
The EU AI Act establishes clear boundaries for responsible AI development. This section dives into specific AI practices deemed too risky and therefore prohibited within the EU.
The EU AI Act prohibits AI systems that use subliminal or manipulative techniques to distort behavior and decision-making. These practices, which operate beyond a person's consciousness, aim to impair an individual's ability to make informed decisions. Examples include AI-driven advertising that subtly influences purchasing decisions without the consumer's awareness.
AI systems that exploit the vulnerabilities of specific groups, such as those based on age, disability, or socio-economic status, are also banned. These systems manipulate behavior in a way that causes significant harm. For instance, AI tools targeting elderly individuals with deceptive health-related advertisements exploit their age-related vulnerabilities.
The Act prohibits AI systems that evaluate or classify individuals based on their social behavior, leading to social scoring. This practice can result in unjustified and disproportionate treatment, such as denying services based on a person's inferred behavior or characteristics in unrelated contexts.
Using AI to predict criminal offenses solely based on profiling or assessing personality traits is prohibited. However, exceptions are made for AI systems that support human assessment in criminal investigations, provided they rely on objective and verifiable facts.
The creation and expansion of facial recognition databases through untargeted scraping of images from the internet or CCTV footage are prohibited. This practice raises significant privacy and ethical concerns, as it involves collecting and using biometric data without individuals' consent.
AI systems used to infer emotions in workplaces and educational institutions are banned, with exceptions for medical or safety purposes. For example, emotion recognition technology to monitor student behavior in classrooms is not allowed unless it is for specific safety reasons.
AI systems that categorize individuals based on sensitive biometric attributes, such as race or sexual orientation, are prohibited. However, there are exceptions for law enforcement applications, provided they comply with stringent regulations to protect individuals' rights.
The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes is highly regulated. These systems are allowed only under strict conditions, such as preventing imminent threats or locating missing persons, and require prior authorization by judicial or administrative authorities.
The European Union's AI Act includes detailed provisions and conditions for the use of real-time biometric identification systems, aiming to balance security needs with individual privacy rights. Here are the key aspects:
Real-time biometric identification systems can only be used when strictly necessary and proportionate. This means the system's deployment must be essential for achieving a legitimate objective, such as preventing imminent threats. The impact on individuals' rights and freedoms must be minimal and justified by the situation's severity. For example, using biometric identification in public spaces must balance public safety needs with personal privacy concerns.
Before deploying real-time biometric identification, prior authorization from a judicial or independent administrative authority is required. This ensures the use is legally justified and necessary. In urgent situations, the system can be used without prior authorization, provided that the authorization request is submitted within 24 hours. An impact assessment must also be completed to evaluate the potential effects on fundamental rights.
All uses of real-time biometric identification systems must be reported to relevant authorities, including market surveillance and national data protection authorities. Additionally, annual reports detailing the use of these systems must be submitted to the European Commission. These reports should include data on the number of authorizations and the outcomes of their use, ensuring transparency and accountability.
Member States have the flexibility to impose more restrictive laws on the use of biometric identification systems. They can adopt additional measures that exceed the EU's minimum requirements to better protect individuals' rights. Once these national rules are established, Member States must notify the European Commission within 30 days, ensuring that the Commission is aware of all regulatory frameworks in place across the Union.
This section explains how the EU enforces its new AI Act. It outlines the tiered penalty system for non-compliance, the potential impact on businesses, and proactive strategies to avoid hefty fines and reputational damage.
The EU AI Act enforces a strict tiered penalty structure for non-compliance, calibrated to the severity of the violation:

- Up to €35 million or 7% of total worldwide annual turnover, whichever is higher, for engaging in prohibited AI practices.
- Up to €15 million or 3% of total worldwide annual turnover for non-compliance with the Act's other obligations.
- Up to €7.5 million or 1% of total worldwide annual turnover for supplying incorrect, incomplete, or misleading information to authorities.
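The Act caps each fine at the higher of a fixed amount and a percentage of total worldwide annual turnover. To make that "whichever is higher" logic concrete, here is a minimal illustrative sketch; the `max_fine` helper and the turnover figures are our own illustration, not official tooling, and the Act's separate rule applying the lower of the two amounts to SMEs and start-ups is omitted here:

```python
def max_fine(turnover_eur: int, fixed_cap_eur: int, turnover_pct: int) -> int:
    """Upper bound of a fine tier: the higher of the fixed cap and the
    given percentage of total worldwide annual turnover (integer euros)."""
    return max(fixed_cap_eur, turnover_eur * turnover_pct // 100)

# Prohibited-practice tier (EUR 35M or 7% of turnover, whichever is higher):
print(max_fine(1_000_000_000, 35_000_000, 7))  # 7% of EUR 1B = EUR 70M -> 70000000
print(max_fine(100_000_000, 35_000_000, 7))    # 7% of EUR 100M = EUR 7M -> 35000000
```

As the second call shows, the fixed cap dominates for smaller firms, while for large firms the turnover percentage sets the ceiling.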
Non-compliance with the EU AI Act carries significant financial and reputational risks for businesses. Fines can substantially impact financial stability, while public awareness of violations can damage trust and brand reputation. Proactive compliance strategies are essential to mitigate these risks.
To ensure compliance and mitigate risks, businesses should:
- Establish Risk Management Frameworks: put structured processes in place to identify, classify, and mitigate the risks posed by each AI system across its lifecycle.
- Conduct Regular Audits: periodically review AI systems against the Act's requirements to catch compliance gaps before regulators do.
- Maintain Thorough Documentation: keep records of system design, training data, testing, and risk assessments so compliance can be demonstrated on request.
- Engage in Continuous Monitoring: track deployed systems for drift, misuse, and emerging risks, and stay alert to regulatory updates and guidance.
Holistic AI can help you identify and classify your AI systems, preparing you for the requirements of the EU AI Act and tracking international developments in AI regulation.
Schedule a call to learn how AI governance can support your organization in minimizing the risks of penalties, fines, and incidents.
The EU AI Act identifies several prohibited practices, including subliminal and manipulative techniques, exploitation of vulnerabilities, social scoring systems, criminal risk assessment, facial recognition databases, emotion recognition in sensitive areas, biometric categorization, and real-time remote biometric identification for law enforcement.
The Act is crucial in regulating AI practices, ensuring the protection of fundamental rights, and fostering transparency and accountability in AI deployment.
DISCLAIMER: This blog article is for informational purposes only. It is not intended to, and does not, provide legal advice or a legal opinion, and it is not a do-it-yourself guide to resolving legal issues or handling litigation. It is not a substitute for experienced legal counsel and does not provide legal advice for any particular situation.