Two professors of law assess the merits and questions raised by Musk’s recent lawsuit against OpenAI and its CEO, Sam Altman.


Dana Brakman Reiser, Centennial Professor of Law at Brooklyn Law School

Elon Musk is suing OpenAI and its directors for, among other things, breach of fiduciary duty. The alleged breach consists essentially of allowing the use of OpenAI’s assets (including funds donated by Musk) for pursuit of private, for-profit purposes rather than the charitable nonprofit ones for which OpenAI was founded. OpenAI describes itself as a “nonprofit” with a “goal [] to advance digital intelligence in the way that is most likely to benefit humanity as a whole.” Musk’s complaint argues its operation of an artificial intelligence system (GPT-4) behind a paywall, plans to license future AI technology to Microsoft, and provision of a board observer seat to Microsoft betray this nonprofit mission.

Musk’s claim may or may not be right. But he is surely the wrong person to bring it. The problem is it’s unclear whether anyone else will.

Like OpenAI, nonprofit organizations often take corporate legal forms. This makes it easy for those steeped in for-profit corporate law to assume disgruntled donors or supporters can, like unhappy shareholders, sue to challenge the actions of their fiduciaries. They cannot. Standing to challenge nonprofit fiduciaries' compliance with their duties is instead typically limited to fellow fiduciaries and the state attorney general. Elon Musk is neither.

He is, of course, a large donor. But once gifts have been made, donated assets are no longer donors’ property, and they lose the authority to sue to protect them. Unless a gift instrument provides for donor standing to challenge its use, typically donors must rely on a charity’s fiduciaries and the state attorney general for enforcement. Even contractual standing rights will not entitle donors to challenge a nonprofit’s compliance with its overall mission, as even substantial donations do not make donors the arbiter of a nonprofit’s purpose.

Perhaps surprisingly, standing to sue nonprofit leaders is actually limited in order to protect their missions, which are myriad and sometimes divisive. Endowing contributors or the public at large with standing would subject controversial nonprofits to potentially draining nuisance suits. Competitors could use fiduciary challenges to wrest value from nonprofit entities to the ultimate detriment of their charitable missions. Filling the seats of nonprofit boards with dedicated and expert directors would become even more challenging.

These well-intentioned standing limits make enforcement by the state attorney general all the more critical; but here, as too often, it may not be up to the task. OpenAI created an organizational structure designed to ensure its nonprofit mission remains paramount, but its directors' and officers' actions in commercializing its technology may have crossed the line. If so, attorney-general enforcement is appropriate and warranted. But attorneys general are notoriously under-resourced, and this suit would take on a massive and contentious industry with the potential to infuriate multiple tech billionaires. Will even an attorney general with grave concerns choose to absorb the litigation costs and political fallout of a suit against OpenAI's leaders? This potential vacuum, even when billions in charitable assets are at risk, reveals afresh the risks of underenforcement in the nonprofit sector and the need for creative solutions.

Anupam Chander, Scott K. Ginsburg Professor of Law and Technology at Georgetown University

According to Elon Musk, the fate of humanity may depend on the resolution of his lawsuit, filed earlier in March, against Sam Altman and the company Altman co-founded and runs, OpenAI. Musk claims the lawsuit is designed to ensure that artificial intelligence benefits humanity, rather than becoming “super-intelligent, surpass[ing] human intelligence, and threaten[ing] humanity.”

Musk argues that Altman and OpenAI betrayed OpenAI's founding raison d'être: to be the anti-Google, or "the opposite of Google," as Musk's complaint puts it. OpenAI's technology would benefit humanity, rather than benefiting a few rich people. OpenAI's 2018 charter declares that "[o]ur primary fiduciary duty is to humanity."

Musk’s claims for breach and false promises revolve around two central moves by OpenAI—the decision to partner with Microsoft allegedly undermining OpenAI’s public benefit, and the refusal to make public more information about its latest advances.

This betrayal of the original mission, Musk argues, gives rise to claims for breach of contract, promissory estoppel (promises made enforceable by law to avoid injustice), breach of fiduciary duty, and unfair competition.

The central difficulty for Musk is that the language of OpenAI's commitments is, excuse the pun, open-ended.

Musk notes that the 2015 Certificate of Incorporation requires OpenAI to "benefit the public." However, Altman can certainly argue that free public access to ChatGPT, powered by GPT-3.5, evidences OpenAI's public benefit. After all, millions of people across the world are using OpenAI's free services. The founding documents do not commit OpenAI to benefit the public with every activity. Even a not-for-profit can sell goods or services in order to raise funds for its operations. Unless Musk or some other benefactor had been willing to give orders of magnitude more funds to OpenAI, its efforts to advance AI would have needed another funding source. Its deal with Microsoft is seen as providing crucial advantages to both companies: for OpenAI, the computing resources to develop its models, and for Microsoft, access to advanced AI models that it could use in future products.

Musk’s second complaint is about OpenAI’s lack of, as it were, openness. (Musk even suggested that if OpenAI changed its name to ClosedAI, he would drop the suit.) OpenAI has in fact become less and less open, providing less information about its AI models as they have advanced. OpenAI’s 2015 Certificate of Incorporation, however, only declared that “the corporation will seek to open source technology for the public benefit when applicable.” This loose commitment is not particularly demanding. Moreover, it is certainly plausible that Altman and OpenAI may have drawn the curtains around their most advanced technology in part to avoid that technology falling into the wrong hands.

Musk faces a number of hurdles in his lawsuit, not least his own statements and actions over the course of his relationship with OpenAI. But the central premise of the lawsuit is itself fairly weak.

Humanity’s fate will likely not be decided with this suit.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.