States are beginning to impose idiosyncratic rules on artificial intelligence chatbots and other offerings in response to harms to consumers. Rather than create a fragmented regulatory system that will discourage AI innovation and growth, the AI industry, with the help of the federal government, should encourage a ratings system to improve transparency and protection for consumers.
Litigation over two tragic cases in which teenagers died by suicide following prolonged use of artificial intelligence companions has become a dinner-table conversation, a PTA agenda item, and the subject of congressional scrutiny. That development is, in one sense, welcome. Public attention to the social consequences of modern AI systems is important to making sure advances in AI correspond with advances in public well-being.
But the policy response so far has been reflexive rather than reflective. States from California to Illinois have introduced or enacted laws that treat AI systems less like innovative general-purpose tools and more like banned substances. The result is an emerging patchwork of well-intentioned but incompatible regulations—each advancing a distinct, and sometimes contradictory, vision of what “safe” or “acceptable” AI should mean. For example, imagine that just two states mandate that companies release only “factually accurate” AI companions intended for use by children (a real possibility given that the California legislature passed a similar bill earlier this year that would have become law absent a veto by Governor Gavin Newsom). It is likely, if not certain, that the two states would define the appropriate response to similar queries about sexual identity, faith, and politics differently. Complying with both sets of rules could prove technically and financially burdensome for AI companies, especially smaller firms.
That regulatory fragmentation risks stifling the very competition and experimentation among AI labs that could make AI safer and more beneficial. To balance consumer protection with innovation, we need a policy architecture that is both credible and flexible—one that promotes transparency and accountability without calcifying premature standards. The best model may come from an unexpected corner of American regulatory history: the Motion Picture Association’s voluntary film rating system.
The costs of fragmentation
The current wave of state-level AI legislation reflects a predictable cycle: a scandal produces moral outrage, legislators act, and regulation locks in early, often in ways that are difficult to unwind. Illinois, for example, has banned certain forms of AI-based therapy. As noted above, California lawmakers proposed detailed behavioral constraints on AI “companions” for minors, including a directive that the tools prioritize factual accuracy over a user’s beliefs and preferences. Utah legislators are considering similar rules.
Each of these interventions may appear narrow, but together they threaten to impose significant fixed costs on startups and smaller firms. A company operating across just two states might have to maintain multiple versions of its privacy policies, modify user interfaces, or alter model behavior to meet conflicting mandates. What is a rounding error for an incumbent with a legal department becomes a major financial strain for the average startup, which has a monthly operating budget of just $55,000. As compliance costs rise, the likely result is a regulatory moat: a market that rewards incumbents who can navigate complexity and punishes entrants who cannot.
This pattern is not new. Economic research on regulatory burden consistently finds that heterogeneous compliance regimes deter entry and concentrate market power. For AI—a general-purpose technology whose value depends on wide experimentation—those effects are magnified.
The false binary: protection or progress
The political debate has often framed AI governance as a choice between unchecked innovation and rigid control, but that dichotomy is false. Regulation and innovation are not inherently opposed; the challenge is to structure rules that make safety and competition mutually reinforcing.
Enforcement of existing consumer protection laws by the Federal Trade Commission (FTC) and state attorneys general, for instance, already provides a baseline of protection against fraudulent or misleading AI products. Pieces Technology, an AI firm based in Dallas, learned this the hard way. The company reached a settlement with Texas Attorney General Ken Paxton following his investigation into whether its generative AI tool was as accurate and capable as Pieces claimed. The settlement illustrates that what’s missing is not the legal authority to punish bad actors but a shared language for transparency—a way for consumers to understand what kind of AI tool they’re using, what risks it poses, and how it has been tested or audited.
That gap is where a voluntary, standardized AI rating system could add the most value. Rather than impose one state’s vision of “good AI” on the entire country, such a system would create a market-based mechanism for signaling reliability, safety, and appropriate use.
Learning from the Motion Picture Association
In the 1960s, American filmmakers faced a similar dilemma. Local theaters enforced a vague yet expansive set of prohibitions. Studios could not easily distribute films nationwide without risking a run-in with one locality’s interpretation of what was effectively a moral code. The industry responded by developing the Motion Picture Association (MPA) rating system: a voluntary, standardized framework that informed consumers, reduced censorship risk, and preserved artistic freedom. Today, parents, kids, and consumers can easily assess whether a PG-13, R, or NR movie suits their tastes and interests.
That model holds lessons for AI governance. Like film, AI models vary in their target consumer audience based on what they offer and how they curate information: some tools are designed for general audiences, while others address sensitive or specialized contexts. A voluntary rating system could classify AI systems according to their intended uses and risk profiles. Categories might include “child-safe,” “mental-health appropriate,” “financial-advisory,” or “expert-verified for legal or medical contexts.”
Developers could choose to participate in the rating process, submitting their models for independent evaluation by a third party against transparent criteria. In exchange, they would receive a publicly recognized certification signal that helps consumers make informed choices and mitigates liability risk. Participating developers would likely see a financial boost as parents seek out reliable AI products. How best to develop such ratings is an open question that warrants more thorough analysis. Two possible approaches are tasking an independent body of experts analogous to the International Electrotechnical Commission (IEC) with creating the ratings, or assigning that role to an industry coalition akin to the Payment Card Industry Security Standards Council (PCI SSC). Whatever the body, the ratings should evolve over time as models improve and as society’s tolerance for certain risks changes—just as film ratings have adapted to shifting norms.
The economics of voluntary standards
A voluntary system analogous to MPA ratings would not replace regulation; it would complement it by introducing a credible, low-friction information channel. Standardized disclosure frameworks have long improved market functioning in other sectors. Nutrition labels, Energy Star ratings, and the Consumer Financial Protection Bureau’s simplified mortgage forms all make complex products more intelligible without dictating design choices.
For AI, the economic logic is similar. Information asymmetry—between developers who understand their models and consumers who do not—is a classic market failure. Transparency reduces that asymmetry and allows competition on quality and trust rather than opacity or hype. Firms that can credibly signal reliability will attract users and investors; those that cannot will face market discipline without the need for blunt prohibitions.
Moreover, voluntary standards can scale globally. U.S. companies already face a compliance burden from the European Union’s AI Act, which classifies systems by risk level. A domestic, industry-driven rating framework could allow American firms to demonstrate good-faith compliance in ways that may meet international expectations while remaining consistent with U.S. principles of innovation and free enterprise.
Guardrails and governance
Skeptics will rightly ask: What prevents such a system from devolving into self-serving certification theater? The answer lies in governance and oversight. The MPA’s success stemmed not from government coercion but from public accountability and the group’s concerns about the reputations of its members. An AI rating system should be built on similar principles but with modern safeguards.
First, the system must be transparent in process: criteria for ratings should be public, evidence-based, and periodically updated. Independent auditors, research institutions, and consumer advocates should participate in reviewing both methodologies and outcomes.
Second, there must be a credible enforcement backstop. State attorneys general, under their authority to enforce unfair and deceptive acts and practices (UDAP) laws, can pursue companies that misrepresent or falsify ratings. To do that effectively, they need technical capacity—data scientists, AI auditors, and engineers—to evaluate claims. Additional funding from state legislatures as well as federal grants or interagency partnerships with the National Institute of Standards and Technology (NIST) or the FTC could provide that support. Unlike new legislation, such coordination could emerge quickly within existing statutory frameworks. Notably, attorneys general could also closely monitor the ratings development entity itself, ensuring that it rigorously vets tools and accurately applies its standards.
Third, the system must remain adaptive. AI models evolve at a pace that traditional rulemaking cannot match. A voluntary standard can iterate through versioned updates, reflecting new understandings of risk, explainability, or bias. Flexibility is essential to prevent what the economist Albert Hirschman called “the bias toward exit”: the tendency of innovators to flee overly rigid regimes.
Finally, the rating body must be structured in a way that allays concerns that a centralized private rating body could become a gatekeeper—imposing standards that reflect the biases or interests of a few dominant firms or narrow cultural perspectives. That risk underscores the importance of governance design. Whether the body looks more like the IEC or PCI SSC, the ideal AI ratings entity should be governed by a multi-stakeholder council composed of representatives from across the ecosystem: AI developers large and small, state regulators, consumer advocates, academics, and civil society organizations.
To prevent entrenchment or capture, these representatives should serve temporary, staggered terms, ensuring periodic turnover and the infusion of new expertise. Decision-making should be transparent and consensus-oriented, with clear mechanisms for public comment and review.
A useful precedent comes from the Forest Stewardship Council (FSC), a voluntary certification organization that balances environmental, social, and economic interests through an equal tripartite governance structure. The FSC’s model—dividing voting power among industry, environmental, and community chambers and requiring agreement across them—has preserved credibility for three decades while maintaining adaptability.
An AI ratings body built on similar principles would prevent domination by any single faction and create a living standard—flexible enough to evolve with technology, but legitimate enough to command trust from consumers, regulators, and innovators alike.
The federal role
While a voluntary system should be industry-led, the federal government can play an essential coordinating role. Agencies such as the National Telecommunications and Information Administration (NTIA), NIST, and the FTC could jointly issue a framework outlining desirable disclosure categories—data provenance, testing methodology, model limitations—without mandating how those disclosures are met. This approach mirrors successful public-private partnerships in cybersecurity and privacy, where voluntary frameworks (like the NIST Cybersecurity Framework) have become de facto global standards. Congress, for its part, could establish a modest funding stream to build technical capacity in state attorney general offices. Empowering attorneys general to enforce transparency and truth-in-labeling would make accountability more consistent and credible.
Weighing consumer protection against innovation
Finding the right balance between protecting consumers and encouraging innovation is less about choosing one over the other and more about ensuring both move forward together. When consumers trust new technologies, they’re more likely to use them. Greater adoption will expand relevant AI markets, which will attract investment and fuel competition. Narrow regulation may assist in setting up and sustaining that market, but any legal interventions must avoid imposing unnecessary complexity, especially on smaller developers. A voluntary AI rating system helps strike that balance: it builds confidence without building regulatory barriers to entry, giving consumers the information they need while allowing companies to compete on quality and trust.
Rather than aiming for a world with zero risk, policymakers should aim for one with known and shared risk—a marketplace where people can choose the kind of AI experience they want. Some users will prefer tightly moderated, “safe-for-work” tools, while others will gravitate toward creative, experimental systems. The goal of policy should be to make those choices transparent and trustworthy, not to dictate a rigid statutory definition of acceptable use.
This pluralism contrasts with the emerging state-level approach driven by California, which risks imposing a single moral or epistemic template developed in Sacramento on a diverse public. In the long run, a fragmented regulatory environment not only harms innovation but undermines democratic accountability: it replaces collective deliberation about acceptable risk with parochial judgments.
A better default
The U.S. does not need 50 definitions of “ethical AI.” What it needs is a coherent national framework that enables competition on trust, empowers consumers with reliable information, and equips regulators with the tools to intervene when firms deceive or harm users.
A voluntary AI rating system offers a practical alternative to the growing patchwork of state rules: it builds public trust without suffocating innovation. Rather than forcing all developers to conform to one state’s moral code or compliance burden, a unified, evolving standard developed by a multi-stakeholder entity would let companies compete on credibility while giving consumers clear, comparable information about risk and reliability.
Author Disclosure: The author reports no conflicts of interest. You can read our disclosure policy here.
Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.