The European Commission has emphasised the importance of adopting Artificial Intelligence (AI) systems with a human-centric approach to ensure their safe deployment. This human-centric approach requires implementing AI systems safely and reliably for the benefit of humanity, protecting human rights and dignity by keeping a ‘human-in-the-loop’. Specifically, the EU AI Act proposal would require AI designers to allow human control of, or interference with, an AI system in order to achieve effective human oversight. Under Article 14(1), high-risk systems must be designed and developed in such a way that they can be ‘effectively overseen by natural persons during the period in which the AI system is in use’. The purpose of Article 14(1) is therefore to compel the designers of high-risk AI systems to build a human control function into their products as a safeguard against malfunctions of AI. An unforeseen consequence, however, is that Article 14(1) could create a legal loophole justifying the shifting of responsibility and accountability from one party (users of AI systems acting as human overseers) to another (designers of AI systems). This ambiguity could give rise to legal challenges, as human overseers could argue that Article 14 is not intended to regulate them.
Further, Article 14 of the EU AI Act proposal provides little detail on the human overseers’ responsibilities. Despite the emphasis that legislators have placed on human oversight as a mechanism for mitigating the risks of harmful algorithms, the functional quality of these policies has not been thoroughly interrogated. Strikingly, there is no clear guidance on the standard of meaningful human oversight under EU policy. In theory, adopting algorithms while ensuring human oversight could enable governments to combine the accuracy, objectivity, and consistency of algorithmic decision-making with the individualised and contextual discretion of human decision-making. Article 14(2) of the AI Act establishes that the aim of human oversight is to prevent or minimise the risk of high-risk AI systems infringing on fundamental rights. The absence of clear guidelines on the responsibilities of human overseers, or on what constitutes meaningful human oversight under the proposal, arguably undermines a human-centric approach.
For instance, Article 14 does not consider at what stage a person affected by a high-risk AI system would have the right to request assistance from the human-in-the-loop. Would human oversight begin on the human overseer’s own initiative, or at the request of the person affected by the AI system? Further, it is unclear when a human-in-the-loop is required in order to protect citizens from the different risks of AI systems. Recital 48 of the AI Act proposal places a certain burden of responsibility on the human-in-the-loop by requiring them to have the necessary “competence, training and authority to carry out the role”, yet it is silent on the individual’s role in interpreting or interrupting an AI system when required to do so.
In this sense, introducing human oversight as a safeguard against the negative effects of AI systems, without properly measuring how effective that safeguard would be, is comparable to enacting legislation that merely provides a method for “rubber stamping” compliance without any certainty as to its effectiveness. As currently drafted, Article 14 does not provide appropriate safeguards to prompt human overseers to act in preventing or remedying the possible harms caused by AI systems. If human intervention is a key element in safeguarding people’s rights, then the proposal could take a more active approach and explicitly define the responsibilities of the human-in-the-loop.