AI Act: regulatory revolution or chimera?
The AI Act, the European Union’s first attempt at a comprehensive regulation of artificial intelligence, promises to reconcile technological innovation with the protection of fundamental rights. Beyond the declarations of principle, however, the legislation raises more than one doubt, inviting criticism on both conceptual and practical grounds.
The two souls of the AI Act
The core of the AI Act rests on a delicate balance between two objectives: promoting a competitive internal market and protecting citizens from the risks associated with AI technologies. This dualism emerges clearly from the regulatory structure, which oscillates between the desire to harmonise the European market and the ambition, more political than concrete, to place the human being at the centre of technological innovation.
A new anthropocentrism: rhetoric or reality?
The AI Act rests on a principle as noble as it is ambitious: putting humans back at the centre of the relationship with technology. This avowed ‘anthropocentrism’ does not merely evoke human control over machines; it presents itself as a new ethical vision for Europe’s digital future, a statement of priorities that aims to transform technology from a mere tool into a safe and transparent space for human interaction. Yet the gap between theory and practice becomes apparent as soon as the regulatory provisions are analysed: however loudly it is proclaimed, the anthropocentrism of the AI Act seems diluted into a series of compliance obligations that often replicate existing instruments without adding substance.
Transparency: promise or formality?
One of the most emphasised points of the AI Act is transparency, a concept the legislation articulates in requirements such as clear documentation of AI systems and the obligation to inform people when they are interacting with an artificial intelligence. The effectiveness of these provisions, however, depends on whether they are actually comprehensible to end users. Translating the logic of a complex algorithm into accessible terms is a challenge that goes far beyond the legal sphere, touching on technical and linguistic issues. The real risk is that the promised transparency will be reduced to a formality: documents drawn up to satisfy a regulatory obligation, but useless to those who should actually understand them.
Is human oversight really feasible?
Human oversight, another pillar of the AI Act, is presented as a reassuring answer to the fear of uncontrolled automation. Here too, however, critical issues emerge. The requirement that operators be able to monitor high-risk AI systems and intervene in their operation appears difficult to implement where technological complexity is high. Moreover, the assumption that human oversight can always guarantee effective control ignores the reality of many sectors in which dependence on AI is not easily reduced, and in which the quality of AI outputs is becoming a technical standard from which it will be risky to deviate without exposure to greater civil or criminal liability.
The contestability of algorithmic decisions
One of the most controversial points of the AI Act concerns the contestability of decisions made by AI systems. Unlike the GDPR, which guarantees data subjects clear judicial remedies, the AI Act offers no explicit remedy of its own. The complaint under Article 85, for instance, merely triggers a conformity check and bears little resemblance to an instrument for the protection of fundamental rights. Anyone wishing to challenge an algorithmic decision, or a decision of the authorities supervising AI, must therefore fall back on existing tools such as administrative appeals, the protections offered by the Product Liability Directive, or the GDPR. This gap makes the system of protection uneven and, in many cases, ineffective.
Regulatory ambiguities and the risk of a missed opportunity
Comparison with case law also highlights the weaknesses of the AI Act. Judgments such as that of the Italian Consiglio di Stato (no. 2270/2019) and the European Court of Justice’s ‘Ligue des droits humains’ case have already addressed the problem of algorithmic transparency, stressing the need to ensure not only that the rules are knowable but also that judges can exercise effective review. The AI Act, however, does not seem to take these indications fully on board, confining itself to a bureaucratic vision that risks leaving the thorniest issues unresolved.
Conclusions: between utopia and reality
The real limitation of the AI Act, though, lies in its vision. The legislation seems more intent on imposing compliance obligations than on promoting a genuinely innovative and competitive ecosystem. The high level of compliance required will not translate into a real incentive for companies; it will rather be an additional bureaucratic burden that could penalise European firms compared with their non-EU competitors. In a global market dominated by technology giants, the AI Act risks putting European companies at a competitive disadvantage without offering citizens meaningful protection.
The EU’s decision to regulate artificial intelligence in this way is, in conclusion, unnecessary and self-defeating. With the exception of the ban on certain systems posing unacceptable risk, the AI Act adds no substantive protection for fundamental rights, while burdening a market already struggling to get off the ground with rules that are difficult to implement in practice.