The GDPR governs the use of personal data within the European Union (EU). However, it has extraterritorial scope, meaning that companies outside the EU must comply with GDPR obligations when processing personal data about individuals in the EU. The same is true of the EU AI Act, which will also have implications for companies around the world. The two regulations focus on different entities: the GDPR sets out obligations for data controllers and data processors, while the EU AI Act focuses on providers and users of AI systems. Organisations will therefore need to map these concepts carefully to identify which parties are subject to AI Act requirements, GDPR requirements, or both. This is particularly important because the two regimes overlap, interacting most obviously with respect to (1) bias and discrimination, (2) risk assessments and (3) solely automated decision-making.
Article 9 of the GDPR prohibits data controllers from processing special category data unless an exception applies. Special category data includes “personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation”. There has been legal debate around the word “revealing”, which suggests that special category data covers more than information explicitly stating, for example, a person’s racial or ethnic origin. The Court of Justice of the European Union (CJEU) appears to agree: in August 2022, it held that if an organisation can infer or deduce special category data from other information, that underlying information should also be treated as special category data (Case C-184/20). In a machine learning context, the CJEU’s decision means that proxy variables could be considered special category data under the GDPR.
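To make the proxy-variable point concrete, the following is a minimal, illustrative Python sketch (the dataset, column names and values are hypothetical, not drawn from the case) showing one way to test whether a seemingly neutral feature such as postcode predicts a protected attribute. If it does so well above chance, that feature may effectively “reveal” special category data in the sense the CJEU describes.

```python
# Illustrative sketch only: column names and data are hypothetical.
# Idea: if a "neutral" feature predicts a protected attribute well,
# it may act as a proxy and effectively reveal special category data.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

# Hypothetical applicant data: postcode is the candidate proxy,
# ethnicity is the special category attribute used for this audit only.
df = pd.DataFrame({
    "postcode":  ["AB1", "AB1", "CD2", "CD2", "AB1", "CD2", "AB1", "CD2"] * 50,
    "ethnicity": ["A",   "A",   "B",   "B",   "A",   "B",   "B",   "A"] * 50,
})

X = pd.get_dummies(df[["postcode"]])  # one-hot encode the candidate proxy
y = df["ethnicity"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

clf = LogisticRegression().fit(X_train, y_train)
score = balanced_accuracy_score(y_test, clf.predict(X_test))

# A score well above 0.5 (chance level for two balanced classes) suggests
# the feature carries information about the protected attribute.
print(f"Postcode predicts ethnicity with balanced accuracy {score:.2f}")
```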
By contrast, the EU AI Act provides an explicit exemption from the Article 9 GDPR prohibition. Article 10(5) states that “to the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may process special categories of personal data referred to in Article 9(1) [GDPR]”. Article 10(5) goes on to require “appropriate” safeguards. The two regulations could therefore leave entities facing conflicting requirements.
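Article 10(5) matters in practice because many standard bias checks cannot be run without processing the protected attribute itself. As a rough illustration (the data, group labels and the 0.8 threshold below are hypothetical, not taken from either regulation), the sketch computes per-group selection rates and a demographic-parity-style disparity ratio:

```python
# Illustrative sketch only: data, group labels and the 0.8 threshold are hypothetical.
# Point: computing per-group selection rates requires processing the protected
# attribute itself, which is why Article 10(5) AI Act provides a narrow basis.
import pandas as pd

results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,    1,   0,   1,   0,   0,   1,   0],
})

rates = results.groupby("group")["selected"].mean()
ratio = rates.min() / rates.max()  # "four-fifths rule"-style disparity ratio

print(rates)
print(f"Selection-rate ratio: {ratio:.2f}"
      + ("  -> investigate potential bias" if ratio < 0.8 else ""))
```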
Further, Article 35 of the GDPR requires data controllers to carry out data protection impact assessments (DPIAs) where processing is likely to result in a high risk to the rights and freedoms of natural persons. The opinion explains that providers (as defined in the AI Act) will not always be able to assess all possible uses of a system. A provider’s initial assessment of whether the system is high-risk under the AI Act therefore does not exclude a subsequent DPIA by the user, even if the provider concludes that the system is not high risk. As a result, the same system could be subject to different risk classifications and risk management requirements under each law.
Finally, Article 22(1) GDPR gives individuals the right not to be subject to a decision based solely on automated processing which produces legal or similarly significant effects. Article 14 of the AI Act creates a related requirement for human oversight of high-risk systems. Under the AI Act, it therefore seems possible, in principle, for an AI system classed as low or minimal risk to make a solely automated decision within the meaning of the GDPR, so Article 22 GDPR could still apply even where the AI Act imposes no human oversight obligation. Building on this, the CJEU is considering whether creating a credit score is a decision in and of itself. The outcome will likely affect other AI systems used to score, rank or assess individuals.