
Prohibited AI Practices Under the EU AI Act

Published on Jul 9, 2024

The EU AI Act is a groundbreaking regulation designed to set harmonized rules for the development, market placement, and use of artificial intelligence across the European Union. Its primary goal is to ensure that AI systems are safe, transparent, and respect fundamental rights. Understanding prohibited AI practices is crucial for businesses and developers to comply with the Act and avoid significant penalties. This article will outline the key prohibited practices under the EU AI Act, helping stakeholders navigate this complex regulatory landscape effectively.

History and Development of the EU AI Act

The development of the EU AI Act began with its initial proposal by the European Commission on April 21, 2021. The Act has since undergone significant consultation and refinement. Key milestones include the Council of the EU's adoption of its General Approach in November 2022 and the European Parliament's adoption of its negotiating position in June 2023. These milestones mark critical amendments and consensus-building phases that shaped the current version of the Act.

On 13 March 2024, the European Parliament approved the AI Act in a plenary vote, clearing the way for its entry into force.

Core Objectives and Principles

The EU AI Act is driven by three core objectives:

  • Protection of Fundamental Rights: The Act aims to ensure that AI systems do not infringe on the fundamental rights of individuals, such as privacy, non-discrimination, and safety.
  • Prevention of Harmful AI Practices: By prohibiting certain AI applications, the Act seeks to prevent practices that could cause significant harm to individuals or groups.
  • Ensuring Transparency and Accountability: The Act mandates transparency in AI operations and assigns accountability to providers and deployers to maintain trust and oversight.

These objectives collectively aim to foster a safe, ethical, and transparent AI environment within the EU.

Prohibited AI Practices under the EU AI Act

The EU AI Act establishes clear boundaries for responsible AI development. This section dives into specific AI practices deemed too risky and therefore prohibited within the EU.

Subliminal and Manipulative Techniques

The EU AI Act prohibits AI systems that use subliminal or manipulative techniques to distort behavior and decision-making. These practices, which operate beyond a person's consciousness, aim to impair an individual's ability to make informed decisions. Examples include AI-driven advertising that subtly influences purchasing decisions without the consumer's awareness.

Exploitation of Vulnerabilities

AI systems that exploit the vulnerabilities of specific groups, such as those based on age, disability, or socio-economic status, are also banned. These systems manipulate behavior in a way that causes significant harm. For instance, AI tools targeting elderly individuals with deceptive health-related advertisements exploit their age-related vulnerabilities.

Social Scoring Systems

The Act prohibits AI systems that evaluate or classify individuals based on their social behavior, leading to social scoring. This practice can result in unjustified and disproportionate treatment, such as denying services based on a person's inferred behavior or characteristics in unrelated contexts.

Criminal Risk Assessment

Using AI to predict criminal offenses solely based on profiling or assessing personality traits is prohibited. However, exceptions are made for AI systems that support human assessment in criminal investigations, provided they rely on objective and verifiable facts.

Facial Recognition Databases

The creation and expansion of facial recognition databases through untargeted scraping of images from the internet or CCTV footage are prohibited. This practice raises significant privacy and ethical concerns, as it involves collecting and using biometric data without individuals' consent.

Emotion Recognition in Sensitive Areas

AI systems used to infer emotions in workplaces and educational institutions are banned, with exceptions for medical or safety purposes. For example, emotion recognition technology to monitor student behavior in classrooms is not allowed unless it is for specific safety reasons.

Biometric Categorization

AI systems that categorize individuals based on sensitive biometric attributes, such as race or sexual orientation, are prohibited. However, there are exceptions for law enforcement applications, provided they comply with stringent regulations to protect individuals' rights.

Real-Time Remote Biometric Identification for Law Enforcement

The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes is highly regulated. These systems are allowed only under strict conditions, such as preventing imminent threats or locating missing persons, and require prior authorization by judicial or administrative authorities.

Detailed Provisions and Conditions for Real-Time Biometric Identification

The European Union's AI Act includes detailed provisions and conditions for the use of real-time biometric identification systems, aiming to balance security needs with individual privacy rights. Here are the key aspects:

Strict Necessity and Proportionality

Real-time biometric identification systems can only be used when strictly necessary and proportionate. This means the system's deployment must be essential for achieving a legitimate objective, such as preventing imminent threats. The impact on individuals' rights and freedoms must be minimal and justified by the situation's severity. For example, using biometric identification in public spaces must balance public safety needs with personal privacy concerns.

Authorization and Impact Assessment

Before deploying real-time biometric identification, prior authorization from a judicial or independent administrative authority is required. This ensures the use is legally justified and necessary. In urgent situations, the system can be used without prior authorization, provided that the authorization request is submitted within 24 hours. An impact assessment must also be completed to evaluate the potential effects on fundamental rights.

Notifications and Reporting

All uses of real-time biometric identification systems must be reported to relevant authorities, including market surveillance and national data protection authorities. Additionally, annual reports detailing the use of these systems must be submitted to the European Commission. These reports should include data on the number of authorizations and the outcomes of their use, ensuring transparency and accountability.

Member State Regulations

Member States have the flexibility to impose more restrictive laws on the use of biometric identification systems. They can adopt additional measures that exceed the EU's minimum requirements to better protect individuals' rights. Once these national rules are established, Member States must notify the European Commission within 30 days, ensuring that the Commission is aware of all regulatory frameworks in place across the Union.

Compliance and Enforcement

This section explains how the EU enforces its new AI Act. It outlines the tiered penalty system for non-compliance, the potential impact on businesses, and proactive strategies to avoid hefty fines and reputational damage.

Penalties for Non-Compliance

The EU AI Act enforces a strict tiered penalty structure for non-compliance to ensure proportionality and effectiveness:

  • Severe Violations: Offenses such as using prohibited AI systems can lead to fines of up to €35 million or 7% of global annual turnover, whichever is higher.
  • Moderate Violations: Failure to comply with specific obligations can incur fines up to €15 million or 3% of global annual turnover.
  • Minor Violations: Providing incorrect or misleading information to authorities can result in fines up to €7.5 million or 1% of global annual turnover.
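The "whichever is higher" rule above means the applicable cap is the greater of a fixed amount and a percentage of global annual turnover. A minimal sketch of that arithmetic (for illustration only, not legal advice; tier names and figures are taken from the bullets above):

```python
# Maximum fine caps under the EU AI Act's tiered penalty structure:
# each tier caps fines at the HIGHER of a fixed amount and a
# percentage of global annual turnover.
TIERS = {
    "severe": (35_000_000, 0.07),    # e.g. use of prohibited AI systems
    "moderate": (15_000_000, 0.03),  # e.g. breach of specific obligations
    "minor": (7_500_000, 0.01),      # e.g. misleading information to authorities
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a tier: the higher of the
    fixed cap and the turnover-based cap."""
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, pct * global_annual_turnover_eur)

# Example: a company with EUR 1 billion global turnover facing a
# severe violation is exposed to 7% of turnover (EUR 70M > EUR 35M).
print(max_fine("severe", 1_000_000_000))
```

For large companies the turnover-based cap dominates; for smaller ones the fixed amount is the binding ceiling.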

Impact on Businesses

Non-compliance with the EU AI Act carries significant financial and reputational risks for businesses. Fines can substantially impact financial stability, while public awareness of violations can damage trust and brand reputation. Proactive compliance strategies are essential to mitigate these risks.

Proactive Compliance Strategies

To ensure compliance and mitigate risks, businesses should:

Establish Risk Management Frameworks:

  • Identify and assess potential risks associated with AI systems.
  • Implement measures to mitigate identified risks.

Conduct Regular Audits:

  • Perform internal and external audits to ensure ongoing compliance.
  • Document audit findings and implement necessary changes.

Maintain Thorough Documentation:

  • Keep detailed records of AI system development and deployment processes.
  • Ensure documentation is up-to-date and accessible for review.

Engage in Continuous Monitoring:

  • Monitor AI systems for compliance with regulatory requirements.
  • Adapt strategies based on legislative updates and industry best practices.

Final Recommendations

  • Stay Informed and Compliant: It is vital for businesses to stay updated on the latest developments and requirements of the EU AI Act. Regular monitoring of legislative updates and engagement with regulatory bodies will help ensure ongoing compliance.
  • Adopt Responsible AI Practices: Companies should proactively adopt responsible AI practices by implementing robust risk management frameworks, conducting regular audits, and maintaining thorough documentation. This approach will help mitigate risks, avoid penalties, and build trust in AI systems.

Holistic AI can help you identify and classify your AI systems, preparing you for the requirements of the EU AI Act and tracking international developments in AI regulation.

Schedule a call to learn how AI governance can support your organization in minimizing the risk of penalties, fines, and incidents.

Conclusion

The EU AI Act identifies several prohibited practices, including subliminal and manipulative techniques, exploitation of vulnerabilities, social scoring systems, criminal risk assessment, facial recognition databases, emotion recognition in sensitive areas, biometric categorization, and real-time remote biometric identification for law enforcement.

The Act is crucial in regulating AI practices, ensuring the protection of fundamental rights, and fostering transparency and accountability in AI deployment.

DISCLAIMER: This blog article is for informational purposes only. It is not intended to, and does not, provide legal advice or a legal opinion, nor is it a do-it-yourself guide to resolving legal issues or handling litigation. It is not a substitute for experienced legal counsel and does not provide legal advice on any specific situation.
