Elon Musk recently sued OpenAI, claiming that the company has strayed from its social mission and focused instead on profit maximization. Roberto Tallarita examines how Musk’s lawsuit shows well-intentioned corporate planners how hard it is to commit to an effective and enforceable social purpose, and warns policymakers that relying on corporate self-regulation of AI could be a fatal mistake.


Elon Musk’s lawsuit against OpenAI has cast a public spotlight on one of the hardest problems in artificial intelligence (AI) policy: How can we make sure, as a society, that the most powerful AI models are compatible with human values and safety?

The potential social consequences of AI, good and bad, are so large that there is little doubt that the government must play a crucial role in addressing this question. In the meantime, however, private actors have been offering their own tentative answers. Those proposed by the leading AI labs share a common feature: non-traditional governance structures aimed at protecting the firms’ social purpose from the pressure of profit maximization.

OpenAI is controlled by a nonprofit with a mission “to ensure that artificial general intelligence (AGI)…benefits all of humanity.” Anthropic, a competitor, is a “public benefit corporation” in which a “long-term benefit trust” has the right to appoint some members (and, ultimately, a majority) of the board. DeepMind was sold to Google in 2014 on the condition that an independent “ethics board” would oversee the development of its AGI technology.

These governance experiments are based on the correct realization that in a traditional corporate governance structure no one can effectively enforce a social purpose. Corporate boards can and often do make emphatic proclamations of social responsibility, but there is no mechanism for holding them accountable for these promises. In fact, corporate boards face powerful pressure from investors to deviate from social goals and yield to profit maximization. That’s why AI labs have opted for creative structures that try to insulate boards from this pressure.

Achieving insulation from investor pressure, however, is easier said than done. AI companies need massive funding, and investors are unlikely to provide the money without asking for, and receiving, control rights to influence corporate strategies. The OpenAI board’s failed attempt to fire CEO Sam Altman in November 2023 due to safety concerns demonstrates that even socially oriented governance can struggle to counteract the overwhelming force of the profit motive, in this case exerted by OpenAI investors such as Microsoft and Thrive Capital.

More importantly, insulation is not enough. Even if investors and top executives were successfully subordinated to a nonprofit entity or an independent board, the question remains: Who will ensure that the board does not stray from its mission? Who can enforce the promise of a safe AGI? Who has good reasons, economic incentives, and effective power to do so? And who actually will?

In his lawsuit, Elon Musk claims to play precisely this role. The complaint includes a 2015 email from Altman to Musk, in which Altman wrote that the “[d]evelopment of superhuman machine intelligence… is probably the greatest threat to the continued existence of humanity” and proposed the creation of a new AI lab with a social mission. Musk complains that OpenAI, the company born from that proposal and to which he provided startup funding, has betrayed this mission, and he asks the court to order OpenAI to comply with its social goal and return the improper profits. Ironically, information has since come to light that Musk himself also thought OpenAI should be, in part, a for-profit entity.

Musk’s case has several weaknesses, but some of them are instructive. They show well-intentioned corporate planners how hard it is to build an enforceable social purpose; and they warn policymakers that relying on corporate self-regulation of AI could be a fatal mistake.

First, it is not at all clear whether the “founding agreement” that Musk is trying to enforce is a real contract. Furthermore, even if there were such a contract, the commitment to develop AGI “for the public benefit when applicable” is flexible at best. Who gets to decide when the social purpose is applicable? The board, obviously.

To be sure, a donor or a socially minded investor could try to avoid these mistakes by writing a real and detailed contract restricting the ability of an AI lab to deviate from its social goal. What should the contract say? Perhaps it could prohibit specific transactions, such as the sale or licensing of assets to for-profit firms, which would divert assets from the social mission, or the granting of stock options to executives, which would pull their incentives away from the social goal.

But no contract can specify how to gauge the moral fiber of the next board member, which CEO to hire or when to fire her, or how to balance business and social needs in specific situations. Such a level of foresight is simply impossible, and excessive contractual limitations on the board’s discretion would be impractical and possibly legally invalid. Furthermore, no firm can be a serious player in a cutting-edge sector with too many predetermined constraints on its decision making. A contractual straitjacket is a non-starter for practical, economic, and legal reasons.

Since the board will have to retain substantial discretion over key decisions, the viability of the social purpose will ultimately depend on who sits on the board and on their character, processes, and incentives. Character, however, is hardly legible from the outside. To be taken seriously, AI labs should come up with mechanisms that create external pressure on present and future board members.

Two additional weaknesses of Musk’s lawsuit against OpenAI are a good starting point for thinking about this problem. First, Musk relies on the little public information available about the inner workings of OpenAI and tries to guess the rest. A more transparent corporate organization would undoubtedly be more exposed to outside scrutiny.

A well-intentioned corporate planner of an AI lab should therefore commit to a level of transparency similar to that of public companies: transparency on ownership structure, executive pay, transactions with related parties, financial statements, charter and bylaw provisions, and major contracts. Public scrutiny is not a perfect substitute for legal enforcement, but it’s far better than the status quo of opacity.

Second, Musk is trying to enforce the fiduciary duties of the OpenAI board, but nonprofit fiduciary duties are typically enforceable only by state attorneys general, who are too busy and understaffed to police nonprofit organizations. AI labs can signal a credible commitment to a social mission only by providing outsiders with actionable powers and strong incentives to enforce this mission.

This is, after all, the traditional way in which liberal democracies establish “checks and balances.” In the famous words of James Madison in Federalist 51, “ambition must be made to counteract ambition.” Madison’s view is based on the sober recognition that people are not angels. Angelic nature would be best, of course, but the second-best system is a “policy of supplying, by opposing and rival interests, the defect of better motives.”

A similar approach should apply to AI private governance. AI companies should set up mechanisms that ignite the ambition of outsiders to counteract the ambition, and potential disloyalty, of the insiders. In the pursuit of for-profit goals, corporate planners do this all the time. They and their advisers constantly come up with ingenious tools that provide financial incentives to outsiders in ways that align with corporate goals.

Is it possible to design similar instruments for AI safety goals? The ideal instrument should make the safety commitment credible and enforceable even if the original decision makers are replaced or change their minds. It is a very difficult challenge, perhaps even an insoluble one. But only once we see bold, creative experiments in this direction can we start to trust the authenticity of AI corporations’ social commitment.

As Montesquieu observed, “Happy is it for men that they are in a situation in which, though their passions prompt them to be wicked, it is, nevertheless, to their interest to be humane and virtuous.” While policymakers figure out how to act (and they should do so quickly), we should hope that AI corporate leaders will follow their best passions, rather than their wicked ones; but, to obtain this temporary license from society, they can’t ask us to trust their good intentions. Instead, they should prove that we can rely on their interest.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.