In light of the rise of generative artificial intelligence (AI) and recent debates about the socio-political implications of large language models and chatbots, Manuel Wörsdörfer analyzes the strengths and weaknesses of the European Union’s Artificial Intelligence Act (AIA), the world’s first comprehensive attempt by a government body to address and mitigate the potential negative impacts of AI technologies. He recommends areas where the AIA could be improved.


The rise of generative AI—including chatbots such as ChatGPT and the underlying large language models—has sparked debates about its potential socio-economic and political implications. Critics point out that these technologies could exacerbate the spread of disinformation, deception, fraud, and manipulation; amplify the risks of bias and discrimination; incite hate crimes; expand government surveillance; and further undermine trust in democracy and the rule of law.

As these and similar discussions show, there appears to be a growing need for AI regulation, and several governments have already begun taking initial steps. For instance, the Competition and Markets Authority in the United Kingdom has launched an investigation examining “AI foundation models,” and the White House has released a “Blueprint for an AI Bill of Rights” and a statement on “Ensuring Safe, Secure, and Trustworthy AI Principles.” But the European Commission has made the boldest move so far with its Artificial Intelligence Act (AIA) proposal. The AIA went through extensive deliberations and negotiations in the Council of the European Union and European Parliament in 2022, and its revised version received parliamentary approval in the early summer of 2023.

The Artificial Intelligence Act

The AIA’s primary goal is to create a legal framework for secure, trustworthy, and ethical AI. It aims to ensure that AI technologies are human-centric, safe to use, compliant with existing laws, and respectful of fundamental rights, especially those enshrined in the EU’s Charter of Fundamental Rights. It is also part of the EU’s digital single market strategy, which seeks to update the EU’s integrated market by tackling new obstacles raised by the digital economy. The AIA complements the General Data Protection Regulation and is consistent with the Digital Services Act, the Digital Markets Act, and other regulatory initiatives designed to address the societal and competition concerns raised by the digital economy.

The AIA imposes regulatory restrictions on different AI technologies by sorting them into risk categories: prohibited risk, high risk, and limited or minimal risk. The higher the risk, the more regulations apply. The AIA denies market access whenever the risks are deemed too high for risk-mitigating interventions. For example, it bans public authorities from using AI to issue social scores to citizens or from deploying real-time biometric identification systems to monitor people in public spaces.

For high-risk AI systems—e.g., tools used to monitor and operate critical infrastructure—market access is granted only if they comply with the AIA’s requirements, which include ex-ante technical measures, such as risk management systems and certification, and an ex-post market monitoring procedure. Minimal-risk systems must fulfill only general safety requirements, such as those in the General Product Safety Directive.

In short, the AIA defines prohibited and high-risk AI practices that pose significant risks to the health and safety of individuals or negatively affect their fundamental rights. Based on the Parliament’s recommendations, chatbots and deepfakes would be considered high-risk because of their capacity to manipulate, incite bias or violence, and degrade democratic institutions. The AIA also defines where real-time facial recognition is allowed, restricted, or prohibited and imposes transparency and other risk-mitigating obligations on those technologies.
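To make the Act’s tiered logic concrete, here is a minimal sketch of how risk category maps to regulatory consequence. It is purely illustrative: the category names and obligations are paraphrased from the summary above, not drawn from the Act’s legal text.

```python
# Illustrative sketch of the AIA's tiered logic; category names and
# obligations paraphrase the article's summary, not the Act itself.
from enum import Enum, auto

class RiskTier(Enum):
    PROHIBITED = auto()          # e.g., social scoring by public authorities
    HIGH = auto()                # e.g., tools operating critical infrastructure
    LIMITED_OR_MINIMAL = auto()  # e.g., most everyday consumer applications

def market_access(tier: RiskTier) -> str:
    """Return the (paraphrased) regulatory consequence for a given tier."""
    if tier is RiskTier.PROHIBITED:
        return "No market access: risk deemed too high to mitigate."
    if tier is RiskTier.HIGH:
        return ("Conditional market access: ex-ante risk management and "
                "certification, plus ex-post market monitoring.")
    return ("General safety requirements, e.g., those of the General "
            "Product Safety Directive.")

for tier in RiskTier:
    print(tier.name, "->", market_access(tier))
```

The point of the sketch is simply that the regulatory burden is a step function of assessed risk, with outright prohibition as the limiting case.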

Strengths and Weaknesses

The AIA acknowledges that AI poses potential harm to democracy and fundamental human rights and that voluntary self-regulation offers inadequate protection. At the same time, the Act recognizes the obstacles regulation can pose to innovation and establishes programs that allow companies to explore new technologies, such as the so-called regulatory sandboxes for small companies and start-ups.

Among the AIA’s strengths is its legally binding character, which marks a welcome departure from existing voluntary or self-regulatory AI ethics initiatives. Those initiatives often lacked proper enforcement mechanisms and failed to secure equal commitment from tech companies and EU member states. The AIA changes this and creates a level playing field in the EU digital market.

Other positive aspects include the AIA’s ability to address data quality and discrimination risks, its establishment of institutional innovations such as the European Artificial Intelligence Board (EAIB), and its promotion of transparency via the mandate that AI logs and databases be made available to the public (i.e., opening up algorithmic “black boxes”). The AIA is the most ambitious of current AI regulatory initiatives, and its innovations and new requirements for the industry may facilitate similar initiatives elsewhere, given the EU’s propensity to unilaterally alter global market dynamics.

Yet, from an AI ethics perspective, the AIA falls short of realizing its full potential. Experts are primarily concerned with its proposed governance structure and specifically criticize its:

• Lack of effective enforcement (i.e., over-reliance on provider self-assessment and monitoring, and the discretionary leeway granted to standardization bodies).

• Lack of adequate oversight and control mechanisms (i.e., inadequate stakeholder consultation and participation, existing power asymmetries, lack of transparency, and consensus-finding problems during the standardization procedure).

• Lack of procedural rights (i.e., complaint and remedy mechanisms).

• Lack of worker protection (i.e., possible undermining of employee rights, especially for workers exposed to AI-powered workplace monitoring).

• Lack of institutional clarity (i.e., lack of coordination and clarification of competencies of oversight institutions).

• Lack of sufficient funding and staffing (i.e., underfunding and understaffing of market surveillance authorities).

• Lack of consideration of environmental sustainability issues, given AI’s significant energy requirements (i.e., no mandate for “green” or sustainable AI).

Reform Measures

To address these issues, several reform measures need to be taken, such as introducing or strengthening the following:

• Conformity assessment procedures: The AIA needs to move beyond the currently flawed system of provider self-assessment and certification towards mandatory third-party audits for all high-risk AI systems. That is, the existing governance regime, which involves a significant degree of discretion for self-assessment and certification for AI providers and technical standardization bodies, needs to be replaced with legally mandated external oversight by an independent regulatory agency with appropriate investigatory and enforcement powers.

• Democratic accountability and judicial oversight: The AIA needs to promote the meaningful engagement of all affected groups, including consumers and social partners (e.g., workers exposed to AI systems and unions), and public representation in the standardization and certification of AI technologies. The overall goal is to ensure that those with less bargaining power are included and their voices are heard.

• Redress and complaint mechanisms: Besides consultation and participation rights, experts also request the inclusion of explicit information rights; easily accessible, affordable, and effective legal remedies; and individual and collective complaint and redress mechanisms. That is, bearers of fundamental rights must have means to defend themselves if they feel they have been adversely impacted by AI systems or treated unlawfully, and AI subjects must be able to legally challenge the outcomes of such systems.

• Worker protection: Experts demand better involvement and protection of workers and their representatives in decisions about the use of AI technologies. This could be achieved by classifying more workplace AI systems as high-risk. Workers should also be able to participate in management decisions regarding the use of AI tools in the workplace, and their concerns must be addressed, especially when technologies that might negatively impact their work experience are introduced, such as constant AI monitoring via wearable technologies and other Internet of Things devices. Moreover, workers should have the right to object to the use of specific AI tools in the workplace and be able to file complaints.

• Governance structure: Effective enforcement of the AIA also hinges on strong institutions and “ordering powers.” The EAIB has the potential to be such a power and to strengthen AIA oversight and supervision. This, however, requires that it have the corresponding capacity, technological and human rights expertise, resources, and political independence. To ensure adequate transparency, the EU’s AI database should include not only high-risk systems but all forms of AI technologies, and it should list all systems used by private and public entities. The material provided to the public should include information on algorithmic risk and human rights impact assessments, and this data should be available to those affected by AI systems in an easily understandable and accessible format.

• Funding and staffing of market surveillance authorities: Besides the EAIB and the AI database, national authorities must be strengthened, both financially and in terms of expertise. The 25 full-time-equivalent positions the AIA foresees for national supervisory authorities are insufficient; additional financial and human resources must be invested in regulatory agencies to implement the proposed AI regulation effectively.

• Sustainability considerations: To better address the negative externalities and environmental impacts of AI systems, experts also demand sustainability requirements for AI providers, e.g., obliging them to reduce the energy consumption and e-waste of AI technologies and thereby move towards green and sustainable AI. Ideally, those requirements should be mandatory and go beyond the existing voluntary codes of conduct and reporting obligations.

Conclusion

As of the time of writing, the European Parliament has approved the revised AIA proposal, and negotiations with the Council have begun. Both institutions—and the Commission—must agree on a common text before the AIA can be enacted. It remains to be seen how the final version will look and whether it will incorporate at least some of the suggestions made in this article.

The challenges posed by generative AI and its underlying large language models will likely necessitate additional AI regulation, and the AIA will need to be revised and updated regularly. Future work in AI ethics and regulation must remain attentive to these developments, and the Commission—and other governing bodies—must incorporate mechanisms that allow them to amend the AIA as we adapt to new AI advancements and learn from the successes and mistakes of our regulatory interventions.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.