Key Issues

Transparency Obligations

Under the EU AI Act, “transparency means that AI systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights” (Recital 27).

In line with this description, a key priority of the EU AI Act is to establish a reliable and transparent regime for AI technologies, enabling market operators and individuals alike to understand how AI systems are designed and used. Transparency is also crucial to promoting the responsible development and use of AI and to strengthening the accountability of the relevant market actors for their AI operations.

Categorization of the transparency rules under the EU AI Act

The EU AI Act introduces different transparency obligations for the providers and deployers of AI systems. These rules can be grouped into three categories, each imposing a different set of obligations depending on the type of AI system or model in question:

  1. Transparency requirements for high-risk AI systems (“HRAIS”)
  2. Transparency obligations on the providers of general-purpose AI (“GPAI”) models
  3. General transparency rules applicable to all relevant AI systems

Transparency with regard to HRAIS

As risk-oriented legislation, the EU AI Act devotes its most detailed rules to AI systems considered high-risk. Accordingly, separate transparency requirements are stipulated for HRAIS under Article 13. Notably, it is the providers of HRAIS who must follow these rules, with a view to ensuring transparency toward the deployers of HRAIS. In this respect, providers of HRAIS are required to:

  1. design and develop their HRAIS in a way that ensures sufficiently transparent operation, enabling deployers to interpret the system’s output and use it appropriately
  2. accompany the HRAIS with ‘instructions for use’ that give deployers clear and complete information on the system’s characteristics, functioning, and other key features

As can be seen, the transparency requirement for HRAIS has two main aspects. The first concerns the technical design of HRAIS: they must be developed to operate transparently, so that deployers can understand how the system functions and produces its outputs. The second complements this technical assurance with the additional obligation to provide deployers with guiding information on the HRAIS.

Transparency obligations of the providers of GPAI models

GPAI models are highly capable and powerful AI models that can be adapted or fine-tuned for a wide range of AI system use cases. Their complex features and capabilities may pose additional challenges in understanding and monitoring their functioning. Thus, with a view to providing additional transparency guardrails for these models, the Act mandates separate obligations for the providers of GPAI models. These obligations can be summarized as:

  1. Creating technical documentation for GPAI models, covering their training, testing, and evaluation processes
  2. Supplying information and documentation to AI system providers who seek to use the GPAI model in their products, helping them understand the model’s capabilities and limitations to meet their legal obligations
  3. Providing a detailed summary of the content used for training the model, to enhance transparency

Transparency as required for all relevant AI systems

In Article 50, the EU AI Act devises a general transparency regime not exclusive to HRAIS or GPAI models but applicable to any AI system that falls under one of the listed use cases. A brief overview of these use cases is given below:

AI Systems in Direct Interaction with Human Beings
  • Obligation: The provider must inform individuals that they are interacting with an AI system, unless this is readily apparent to the user.
  • Exception: AI systems authorized by law for the prevention, detection, investigation, or prosecution of criminal offenses (collectively, “law enforcement”), provided they are not made publicly available for reporting criminal offenses.

AI Systems Generating Synthetic Content
  • Obligation: The provider must ensure that synthetic AI outputs are labeled in a machine-readable format and identifiable as artificially generated or manipulated.
  • Exceptions:
      1. AI systems used for standard editing assistance
      2. AI systems that do not substantially alter the input data or its meaning
      3. AI systems authorized for law enforcement purposes

AI Systems for Emotion Recognition or Biometric Categorization
  • Obligations: The deployer must:
      1. inform individuals exposed to the system about its operation
      2. process personal data in compliance with the relevant data protection laws (e.g., the GDPR)
  • Exception: AI systems authorized for law enforcement purposes

AI Systems Generating Deep Fakes
  • Obligation: The deployer must disclose to individuals that the content has been artificially generated or manipulated.
  • Exceptions:
      1. Artistic or creative works, where disclosure may be made in a manner that does not hamper the enjoyment of the work
      2. AI systems authorized for law enforcement purposes

AI Systems Generating or Manipulating Text of Public Interest
  • Obligation: The deployer must disclose that text published for the purpose of informing the public on matters of public interest has been artificially generated or manipulated.
  • Exceptions:
      1. AI-generated text that has undergone human review or editorial control, with editorial responsibility held by a natural or legal person
      2. AI systems authorized for law enforcement purposes
All of these information and disclosure requirements must be met no later than the individual’s first interaction with or exposure to the AI system.
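The Act requires a “machine-readable format” for labeling synthetic content but does not prescribe any particular standard (industry approaches include watermarking and provenance metadata such as C2PA). Purely as an illustrative sketch, a provider might attach a JSON provenance record to generated output; the field names below are hypothetical, not taken from the Act:

```python
import json


def label_synthetic_output(content: str, generator: str) -> str:
    """Attach a hypothetical machine-readable provenance record to
    AI-generated text. Illustrative only: the EU AI Act does not
    prescribe this (or any specific) labeling format."""
    record = {
        "content": content,
        "provenance": {
            "ai_generated": True,  # discloses the artificial origin
            "generator": generator,  # hypothetical field name
            "disclosure_basis": "EU AI Act, Article 50(2)",
        },
    }
    return json.dumps(record)


labeled = label_synthetic_output("Example synthetic paragraph.", "example-model")
parsed = json.loads(labeled)
print(parsed["provenance"]["ai_generated"])  # prints: True
```

A real deployment would more likely embed such signals directly in the media format (e.g., image or audio metadata) rather than wrapping the content in JSON; the sketch only shows the idea of a machine-parsable disclosure traveling with the output.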

The interplay between different transparency-related provisions

The transparency requirements under the EU AI Act for a given AI system vary depending on the system’s risk level, and more than one set of rules may apply in parallel. The applicable provisions must therefore be identified on a case-by-case basis, through careful examination of the specific circumstances of the system and its use.
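As a rough illustration only (a simplification, not legal advice), the case-by-case identification of applicable transparency provisions might be sketched as the following decision logic; the function name and labels are our own:

```python
def applicable_transparency_rules(is_high_risk: bool,
                                  is_gpai_model: bool,
                                  falls_under_article_50: bool) -> list[str]:
    """Sketch of how the EU AI Act's transparency regimes can apply
    in parallel to one system or model. Simplified illustration."""
    rules = []
    if is_high_risk:
        # HRAIS: transparency toward deployers (Article 13)
        rules.append("Article 13 (transparency requirements for HRAIS)")
    if is_gpai_model:
        # GPAI models: documentation duties of the model provider
        rules.append("Article 53 (obligations of GPAI model providers)")
    if falls_under_article_50:
        # Listed use cases: disclosure duties toward individuals
        rules.append("Article 50 (general transparency rules)")
    return rules


# Example: a high-risk chatbot interacting directly with individuals
print(applicable_transparency_rules(is_high_risk=True,
                                    is_gpai_model=False,
                                    falls_under_article_50=True))
```

The point of the sketch is that the three categories are not mutually exclusive: a single system can trigger both the HRAIS requirements and the Article 50 disclosure duties at the same time.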

The EU database as a systematic transparency measure

The EU AI Act also includes a potentially powerful mechanism for ensuring systematic transparency: the “EU database” for HRAIS, which is to be set up and maintained by the Commission. This database is envisaged both to facilitate the work of the EU authorities and to enhance transparency toward the public. In this respect, providers of HRAIS are required to register their HRAIS in the EU database, which will host the relevant information and records on HRAIS and be publicly accessible in a user-friendly format. In this way, the EU database may also be conducive to a more “transparent” and inclusive transparency regime for AI systems.

Projection for the future: Clarifying the concept of transparency

It must be emphasized that the EU AI Act does not clearly address the actual level of transparency and understandability that will be required of AI systems. In other words, it is not yet clear how, and to what extent, compliance with the Act’s transparency rules will be achieved in practice. Practical tools, templates, and in particular codes of practice are to be developed by the AI Office; ultimately, however, the true meaning and interpretation of transparency will be shaped over time through the implementation of the Act.
