A key issue concerns the Commission’s proposed conformity assessment requirements for different types of high-risk systems. As it currently stands, Member States and national competent authorities (i.e., the government agencies in charge of oversight, implementation, and enforcement of the regulation) will designate third-party “Notified Bodies” to conduct conformity assessments for AI systems that serve as safety components of regulated products, such as machinery, medical devices and personal protective equipment. For stand-alone high-risk AI systems, however, only an industry self-assessment will be required.
In the case of the internal conformity assessment process, the provider, distributor, importer, or other responsible party must demonstrate conformity through an ‘assessment based on internal control’, i.e., self-certification (Article 43(1)). Under Annex VI, there are three major checks that such entities must carry out: (1) verifying that the established quality management system complies with Article 17; (2) examining the technical documentation to assess the system’s compliance with the essential requirements for high-risk AI systems; and (3) verifying that the system’s design and development process, including its post-market monitoring, is consistent with that technical documentation.
Entities can satisfy these checks either by devising their own bespoke plans for demonstrating conformity or, much more likely, by following the relevant harmonised technical standards.
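To make this routing concrete, the sketch below models the choice of assessment route and the three internal-control checks as a simple decision procedure in Python. It is purely illustrative: the AISystem class, its field names, and the pass/fail logic are simplifying assumptions made for exposition, not an implementation of the Regulation’s legal tests.

```python
from dataclasses import dataclass

# Illustrative, simplified model of the AI Act's conformity-assessment routing
# and the three internal-control checks (Annex VI). All names and logic here
# are assumptions for exposition, not an official codification of the law.

@dataclass
class AISystem:
    name: str
    high_risk: bool
    safety_component: bool       # safety component of a regulated product?
    qms_compliant: bool          # quality management system per Article 17
    tech_docs_compliant: bool    # technical documentation meets the essential requirements
    consistent_with_docs: bool   # design/development and post-market monitoring match the docs

def assessment_route(system: AISystem) -> str:
    """Route a system to the applicable conformity-assessment procedure."""
    if not system.high_risk:
        return "no conformity assessment required"
    if system.safety_component:
        return "third-party assessment by a Notified Body"
    return "internal control (self-assessment, Article 43(1))"

def internal_control_passes(system: AISystem) -> bool:
    """The three checks behind an 'assessment based on internal control'."""
    return all([
        system.qms_compliant,
        system.tech_docs_compliant,
        system.consistent_with_docs,
    ])

if __name__ == "__main__":
    # A hypothetical stand-alone high-risk system (e.g., a credit-scoring model).
    credit_scorer = AISystem(
        name="credit-scoring model",
        high_risk=True,
        safety_component=False,
        qms_compliant=True,
        tech_docs_compliant=True,
        consistent_with_docs=True,
    )
    print(assessment_route(credit_scorer))
    if internal_control_passes(credit_scorer):
        print("next: EU declaration of conformity (Art. 19) and CE marking (Art. 49)")
```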
After performing the internal conformity assessment, the responsible entity must draw up a written EU declaration of conformity for each AI system (Article 19(1)). It must also affix a visible, legible, and indelible CE marking of conformity in accordance with the conditions set out in Article 49. This process must abide by Article 30 of Regulation (EC) No 765/2008, notably that the CE marking shall be affixed only by the provider or other entity responsible for the conformity of the system.
This process risks divergent approaches and therefore creates the potential for varying levels of compliance. Accordingly, there is scepticism as to whether self-assessment and self-certification without third-party notified bodies will be sufficient to protect consumers and their fundamental rights. In addition, the regime may burden companies with hefty compliance costs and administrative hurdles, particularly small and medium-sized enterprises. Partly because of these concerns, the European Economic and Social Committee – a consultative body to the Commission, Council, and Parliament – has recommended making third-party assessments obligatory for all systems classified as high-risk.
As such, the mechanisms for risk management and conformity assessment in the AI Act will likely pave the way for the development of AI auditing standards. Specifically, high-risk AI systems enjoy a presumption of conformity when they comply with officially adopted harmonised standards developed by the designated European Standardisation Organisations, namely CEN, CENELEC and ETSI. Such standards would therefore help harmonise a globally interoperable approach to assessing conformity – with no further self-assessment needed.
However, it is important to note that inconsistencies and differing capacities across sectors and EU Member States will likely hinder the development of the mutual recognition agreements with other countries that will be needed to facilitate trade in AI. A key issue will be whether these technical standards truly embody the substantive goals of the Act, especially regarding fundamental rights. Equally, non-EU states are likely to have little input into these EU standards, yet will find themselves subject to them when exporting to the EU.