
Why Global Coordination is Necessary for Regulating AI


Johannes Fritz and Tommaso Giardini examine the state of AI rulemaking around the world and find that, despite global alignment on principles, execution at the national level diverges on three important metrics. The risk is the fragmentation of the global AI market, as firms choose to exclude entire markets rather than navigate the intricacies of compliance in different regions.


Governments around the world are regulating artificial intelligence at full speed, but they are flying blind. The Digital Policy Alert, an initiative of the Swiss-based St. Gallen Endowment for Prosperity Through Trade overseen by the authors, has documented over 450 policy and enforcement developments targeting AI providers in the past year alone. The problem, however, is not the pace of rulemaking but the lack of coordination: governments align on broad objectives at global summits but do little to align their concrete national rules. Without coordination, rulemakers worldwide risk fragmenting the global AI market, jeopardizing both innovation and their own regulatory objectives.

Many countries have begun regulating AI in isolation, perhaps out of a desire not to repeat the mistake of sitting on the sidelines during past technological shifts, such as the rise of social media. High-profile developments such as the European Union’s AI Act and the United States’ Executive Order on AI have dominated headlines. However, less publicized but equally significant moves are occurring elsewhere. China has implemented three technology-specific AI regulations. Argentina, Brazil, Canada, and South Korea have introduced comprehensive legislative proposals on AI. Recently, Mexico and Indonesia have started deliberating AI rules, too. Notably, this rulemaking unfolds in the absence of a common language or factual base to guide regulatory approaches.

Without international coordination, national AI rules diverge even when they pursue the same objectives. ProMarket recently polled experts on the recommended objectives of AI regulation, which include privacy, competition, and non-discrimination. Governments share these objectives and recognize the need for international coordination, as evidenced by the endorsement of the OECD AI Principles by 47 governments. The OECD AI Principles focus on inclusivity, fairness, transparency, safety, and accountability.

Despite international alignment on high-level principles, our analysis shows that domestic AI rules diverge across three layers. We analyzed 11 comprehensive AI rulebooks, paragraph by paragraph, coding 70 regulatory requirements that implement the OECD AI Principles. The three layers of divergence we find are:

  1. Principle Prioritization: Governments prioritize the OECD AI Principles differently, focusing mainly on accountability and fairness.
  2. Implementation Approaches: When addressing the same OECD AI Principle, governments impose varied regulatory requirements, drawing from ten policy areas.
  3. Granular Requirements: Even when imposing the same regulatory requirement to implement the same OECD AI Principle, the specifics differ across borders.

Take, for example, the three-layered divergence between China and the EU:

  1. China prioritizes fairness, devoting over 30 percent of its requirements to this principle. The EU focuses on accountability, with over 40 percent of its requirements, and dedicates only 15 percent to fairness.
  2. When regulating AI fairness, both China and the EU impose non-discrimination and data protection requirements. But China additionally demands content moderation, while the EU demands human oversight.
  3. Even though China and the EU both regulate non-discrimination, granular differences persist. China requires providers of generative AI to prevent the generation of ethnically discriminatory content and providers of recommendation algorithms to prevent price discrimination. The EU prohibits certain forms of discrimination altogether, such as social scoring, and establishes obligations for “high-risk AI systems,” including averting training data bias and continuously monitoring discrimination risk.

The Threat of Market Fragmentation

Divergent rules risk fragmenting the global AI market into separate regional blocs. Firms struggle to comply with different regimes in different locales. To understand why this is a problem for the global economy, consider the example from a recent ProMarket piece on online child safety rules. The authors explained how regulatory complexity can lead companies to restrict access for all children, creating an “adults-only” internet, rather than navigate compliance intricacies.

Divergent AI rules have a similar effect but may lead AI firms to exclude entire markets. Regulatory divergence multiplies the complexity of compliance to the point that some firms reduce their international operations. The costs of complying with multiple, divergent regulatory environments can be prohibitive. Small firms, with fewer resources for compliance, are hurt disproportionately by these costs.

The long-term cost of a fragmented AI market is foregone innovation and competition. We do not argue that AI regulation stifles innovation per se, but rather that uncoordinated, divergent AI regulation stifles innovation by fragmenting the global AI market. AI is becoming a ubiquitous input for economic activities across the world. If AI firms exit a country due to compliance complexity, the country’s consumers and firms lose access to a catalyst of innovation, with negative consequences that accrue over time. At the global level, since divergence hurts small firms the most, only large firms can afford to maintain a presence across the globe and thus benefit from economies of scale. This hinders competition and, in turn, global AI innovation.

Evidence of AI market fragmentation is already emerging. Meta recently delayed the launch of its AI tools in the EU due to regulatory concerns. OpenAI fully exited China in 2024. Apple removed generative AI apps from its Chinese App Store in response to new rules and will not bring its AI features to Europe, citing antitrust regulations. If AI rules continue to diverge, the unintended fragmentation of the global AI market will continue to unfold.

Counterproductive Outcomes

A fragmented AI market is counterproductive for governments’ regulatory objectives. Governments worry primarily about their own citizens. But as AI transcends borders, so do regulatory objectives. By pursuing the same regulatory objectives without international coordination, governments are unintentionally jeopardizing not only the global AI economy but also their original objectives.

For example, incident notification requirements aim to rapidly inform authorities and those affected by AI incidents, enabling rapid mitigation. AI incidents occur when AI use (or malfunction) violates human rights, damages property, or disrupts infrastructure: for example, when a self-driving car hits a pedestrian or when an AI malfunction causes a data breach. Requirements diverge regarding the timeline, content, and addressees of incident notifications. Firms affected by an incident will thus spend considerable resources on the various notifications rather than on mitigation. This issue is not specific to AI, as incident notification is a standard cybersecurity requirement. But governments, including Brazil and the European Union, are imposing AI incident notification requirements on top of existing requirements in cybersecurity and data protection laws. AI firms affected by incidents must thus issue several notifications in each market, navigating both intranational and international divergence, which hardly contributes to the objective of safety.

Regarding transparency, our analysis reveals that requirements are often specific, for example targeting high-risk AI systems or deepfakes. Everyday users, however, are rarely aware of the specific contexts in which rules apply. This can create a false sense of security: users grow accustomed to transparency in the contexts regulated by their governments, which endangers their ability to detect the use of AI where transparency is not required. Since each government demands transparency in different contexts, divergent transparency expectations emerge. Internet users with an international media diet thus become increasingly unable to identify AI interactions, despite broad international agreement that this is a core concern of AI transparency.

Towards Regulatory Interoperability

A certain level of regulatory divergence is inevitable, since countries have competing interests and differing normative values. It is neither realistic nor desirable to champion a universal AI rulebook. Instead, governments should strive for regulatory interoperability: building bridges between varying regulatory environments. The pursuit of interoperability is urgent because AI rulemaking is still nascent, and some are already taking up the mantle. For example, Singapore’s Infocomm Media Development Authority and the United States’ National Institute of Standards and Technology have developed a crosswalk between their respective “AI Verify” and “AI Risk Management Framework.”

Governments have a unique opportunity to learn from each other and render AI rules interoperable. A recent ProMarket piece explains the benefits of “bottom-up coordination” of international AI supervision. Governments can further draw from the expertise accumulated by national regulators and other governments. For example, governments looking to establish information rights can learn from Brazil’s precise elaboration of information to be disclosed, South Korea’s detailed procedure for requesting information, and the EU’s unique exception mechanisms.

To facilitate interoperability, we developed the Digital Policy Alert to provide transparency through two public goods: our Activity Tracker documents daily developments in AI rulemaking across the globe, and our CLaiRK Suite digitizes AI rulebooks and provides tools to navigate, compare, and interact with AI rules. Building on these tools, governments, researchers, and firms can jointly tackle the challenge of regulatory divergence and prevent the unintended, but predictable, consequences of AI fragmentation.

Author Disclosure: The authors report no conflicts of interest. You can read our disclosure policy here.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.
