One of the questions that Elon Musk’s lawsuit against OpenAI and its CEO, Sam Altman, raises is whether Microsoft’s involvement in changes to OpenAI’s board in November violated nonprofit law. Benjamin Leff assesses this challenge and whether nonprofit law, as it currently stands, is capable of policing nonprofit behavior.

In February, Elon Musk sued OpenAI CEO Sam Altman and a variety of OpenAI entities. He is seeking, among other things, to prevent them from “utilizing OpenAI, Inc. or its assets for the financial benefit of any of the individual Defendants, Microsoft, or any other particular person or entity.” OpenAI, Inc. is a charitable nonprofit organization formed in 2015 to develop AI technology “for the public benefit” rather than “for the private gain of any person.” Musk was instrumental in OpenAI’s founding, and he allegedly made tens of millions of dollars in charitable contributions to it. After its founding, the nonprofit OpenAI formed a variety of for-profit entities that took on Microsoft as their primary financial investor, effectively operating a for-profit joint venture with Microsoft under the exclusive formal control of the OpenAI nonprofit board. According to Musk, a “Board coup” in November 2023 – in which the board removed Altman as CEO only to reinstate him under pressure from Microsoft, after which several board members resigned – destroyed the “system of checks and balances” that was designed to “ensure the nonprofit mission was being carried out.” Musk concluded that the nonprofit OpenAI has now “abandoned its nonprofit mission” in favor of “personally enriching the Defendants.” According to the lawsuit, “That is not supposed to be how the law works in California or anywhere else in this country[.]”

Musk filed his lawsuit in California, asserting state-law causes of action that are likely to be dismissed or to fail on the merits. But the fundamental question he poses is whether nonprofit law is robust enough to enable entrepreneurs to make a credible commitment to funders and other stakeholders that their venture will be operated for the social good. In other words, how can charity funders be assured, under the current legal framework, that someone like Sam Altman won’t use their donations for self-enrichment?

In addition to state law, this question is governed by the federal law of charities, which only the IRS can enforce. And it is an existential question for the social enterprise business model that Musk, Altman, and others chose for OpenAI, as well as critical to the integrity of the nonprofit sector itself. Is it possible to use a nonprofit governance model to protect a social mission, or is that just an empty promise? The key to understanding whether nonprofit law is up to the task is the distinction between two poorly understood doctrines: “private inurement” and “private benefit.” They sound like synonyms, but conflating them makes it impossible to understand what nonprofit law does and what it does not do.

Private inurement occurs when persons who are in a position to influence a nonprofit organization use that influence to improperly enrich themselves. It has two necessary elements: (1) someone who is in a position to influence the nonprofit (often called a “disqualified person”), and (2) an excessive benefit. Most nonprofit organizations have conflict-of-interest policies to protect themselves against the kind of insider benefit that constitutes private inurement. A nonprofit that engages in private inurement can have its tax-exempt status revoked by the IRS, or very significant penalties can be assessed against it, against the person who receives the benefit, and against the other board members who knowingly approved the excessive benefit. Statute defines who counts as a disqualified person – Altman is one because he is CEO and a board member; Microsoft, as an investor in the for-profit joint venture, is not.

Private benefit, on the other hand, occurs when a nonprofit organization provides an excessive benefit to persons who are not disqualified persons, even though the organization is controlled by a sufficiently independent board. Private benefit may result in a nonprofit organization losing its tax-exempt status, but only if the benefit is quite substantial, and the IRS has no penalties short of revocation with which to punish it. The much stricter treatment of private inurement than of private benefit is by design – nonprofit law is premised on the belief that stricter regulation is required when charities are controlled by persons with financial interests at stake, while financially independent directors should generally be trusted to make decisions about how to pursue the organization’s mission.

In this case, after raising $130 million in charitable contributions, nonprofit OpenAI decided it needed additional capital to pursue its mission effectively, so it created for-profit entities that entered into a deal with Microsoft, which invested billions more – not as charitable donations, but as a business investment. It is not illegal or improper for a charitable nonprofit to raise capital in this way, but it must observe the inurement and private benefit rules when it does so. As far as inurement goes, the key is that when disqualified persons, like Altman, enter into the transaction, the return they receive should not be excessive compared to what a non-disqualified person could receive. This is a difficult determination, but it is made in the first instance by the independent board members (hopefully advised by good lawyers). Microsoft, because it does not formally control the venture, is not a disqualified person, and its profits cannot be inurement.

The private benefit law that applies here, on the other hand, was developed in the 1990s in response to a wave of mergers between nonprofit hospitals and for-profit hospital systems. In those cases, after several rounds of litigation, the IRS agreed that the test of whether a joint venture between a nonprofit and a for-profit satisfied the private benefit standard turned on control of the joint venture. If the nonprofit had sufficient control over the business, the IRS would presume that the joint venture was operated for the nonprofit’s charitable purposes; if the for-profit had excessive control, it would presume the opposite. This control test is premised on the theory that independent decision makers (those without a financial interest in the outcome) should generally be trusted to prioritize the charitable organization’s nonprofit mission, even when the organization gets deeply involved with investors and for-profit partners. Like almost all of nonprofit law, it relies on disinterested private persons to make the most important decisions about what is in the public interest, with very little substantive oversight by government regulators.

The IRS’s control test is why OpenAI’s lawyers gave nonprofit OpenAI’s independent board members unambiguous control over the for-profit entities. So long as those independent board members have ultimate decision-making power over the business venture, the structure is legally permissible. Under that analysis, the fact that the board had the authority to oust Altman over Microsoft’s obvious objection is the clearest evidence that the structure was proper from a legal point of view. It suggests that it is possible for an independent board to use its judgment to advance a nonprofit mission to benefit humanity even when billions of dollars are at stake. If one cares about social mission over financial return, a corporate governance structure that gives ultimate authority to independent board members has the formal ability to ensure that prioritization.

On the other hand, the fact that the independent board members almost immediately changed course, agreed to rehire Altman, and then resigned calls into question the ability of that structure to protect OpenAI’s social mission. Is the OpenAI board coup evidence that the control test relies on too many formal factors (whether disinterested persons sit on the board) and not enough substantive ones (who really holds the power in the relationship)? Or did the existing board members merely change their minds about what is best for the organization? The replacement directors will allow the board to retain a majority of financially independent members, so the structure should continue to satisfy the control test, notwithstanding Musk’s allegation that they were “hand picked” by Altman.

The question Musk’s lawsuit and the OpenAI saga raise is whether the control test is sufficient or whether Congress and the IRS need to rethink it. The goal of a structure like the one that governs OpenAI is one that Musk and Altman both supported, at least rhetorically – that AI technology be developed to benefit the public – and neither of them trusted private for-profit companies (even their own) to do that safely. The fundamental question is whether the nonprofit sector, given the financial and economic barriers, has a role to play here, or whether it should cede the floor to the other giant players: business corporations and the government.

As a scholar of nonprofit law, I’m optimistic about the role of the nonprofit sector here. I think that a governance model that vests disinterested directors with a duty to pursue a social mission, and then puts them in charge of the ultimate decisions about how to pursue that mission, is the best chance we have for true social enterprises. I also think that nonprofit law is the best existing regulatory regime to protect and enforce that model, and so creating a tax-exempt nonprofit as the “parent” company in the structure, as OpenAI did, is the best available approach. Under current law, the split between private inurement, which should be vigorously enforced by the IRS (and by state regulators), and private benefit, which largely defers to the judgment of independent directors as long as they control the venture, is the right regulatory approach. I’m not sure there is any obvious failing in the current legal regime, although it might be appropriate for the IRS or a state attorney general to investigate the inurement issue.

The lawsuit suggests that Elon Musk believed keeping OpenAI’s products open source was an essential component of OpenAI’s social mission. But if he did not want to leave that kind of decision to the discretion of independent board members, then OpenAI’s founding documents should have clearly identified that restriction. Absent any explicit restriction, all that is certain is that Elon Musk and the directors of nonprofit OpenAI disagree about the best way for OpenAI to pursue its social mission. Nonprofit law properly sides with the independent directors over Musk in that disagreement.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.