Jérémie Haese and Christian Peukert present new empirical findings on core open source technologies for the web and AI. Open source holds promise for making AI systems more transparent and secure, but it risks masking continued centralized control under the guise of openness.


Open source software, which anyone can freely use and modify, is having a regulatory moment. From model weight disclosures to mandates to release code to the public, governments are rapidly embracing openness as a lever to govern artificial intelligence. The premise is that open source will make AI systems more transparent, more secure, and more competitive. But that premise rests on theory more than evidence.

The economic literature on open source has grown significantly over the last few decades: scholars have thought about openness as a public good, a reputation-building tool, a commons-based peer production system, and a counterweight to dominant incumbents. However, we lack systematic empirical evidence about what happens when large firms adopt existing open-core infrastructure—foundational open source software that forms the base of a product, often built upon by developers with their own proprietary software. Do these firms’ influence and market power corrode the quality of open source infrastructure, or benefit it? This gap is particularly acute in high-stakes, commercially salient domains such as AI. We also know little about whether open cores actually attract independent contributors and improve security standards, or instead serve to reinforce large firms’ control under the banner of transparency.

Our empirical findings help ground this discussion. In one recent case, when Microsoft adopted Google’s open source Chromium core for its Edge web browser, concerns quickly arose about a loss of browser diversity (most other major browsers already relied on Chromium), about Microsoft’s loss of independence from Google’s project, and about the crowding out of community effort. However, a detailed analysis of contributions, security reporting, and product development around this episode suggests a more complex pattern. Microsoft’s entry coincided with a significant influx of new contributors to the Chromium project, a sharp rise in reported vulnerabilities and patching activity, and a rapid convergence of functionality toward shared standards. Rather than shrinking, the open-core community grew, and security responsiveness increased as Microsoft and Google began coordinating on fixes.

Web browsers, as portals for competition and innovation, have recently returned to the spotlight. In October 2025, OpenAI released ChatGPT Atlas, a browser built on Chromium and integrated with its AI assistant. Like Edge before it, Atlas demonstrates how open-core strategies are increasingly central to the design of user-facing AI products and platforms. The architecture of modern AI often relies on shared infrastructure maintained by other firms, raising similar questions about control, interoperability, and long-term stewardship.

To test whether the pattern observed in the Edge case generalizes to modern AI products, we examined 13,740 open source AI repositories on the developer platform GitHub. These projects include both community-led and company-led efforts. We tracked changes in contributor activity before and after a new commercial entity began contributing to each project. The results are striking: in both community-governed and firm-led projects, external contributions increase after a company joins. These increases are not short-lived; they persist over the following year. Internal (owner) contributions rise as well. The presence of a new commercial participant appears to mobilize, rather than displace, surrounding open source activity.
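To make that comparison concrete, here is a minimal sketch, assuming a simplified commit log in which each commit is labeled as coming from the repository owner, an outside contributor, or the newly entering firm. The repository names, dates, and labels are hypothetical; this illustrates only the before-and-after logic, not the actual pipeline behind the figures reported above.

```python
import pandas as pd

# Hypothetical commit log: one row per commit, labeled by author type.
# "firm" marks commits from the newly entering commercial contributor;
# repository names, dates, and labels are illustrative, not real data.
commits = pd.DataFrame({
    "repo": ["proj-a"] * 6,
    "date": pd.to_datetime([
        "2023-01-10", "2023-02-02", "2023-03-15",
        "2023-05-01", "2023-06-20", "2023-07-04",
    ]),
    "author_type": ["external", "owner", "external",
                    "firm", "external", "owner"],
})

# Date of the first commit by the commercial entity in each repository.
entry = (commits[commits["author_type"] == "firm"]
         .groupby("repo")["date"].min()
         .rename("entry_date")
         .reset_index())

# Tag every commit as falling before or after that entry date.
df = commits.merge(entry, on="repo", how="left")
df["period"] = (df["date"] >= df["entry_date"]).map(
    {True: "after", False: "before"})

# Compare external and owner commit counts across the two periods.
summary = (df[df["author_type"].isin(["external", "owner"])]
           .groupby(["repo", "period", "author_type"])
           .size()
           .unstack(fill_value=0))
print(summary)
```

The same before-and-after comparison extends, in the actual analysis, to thousands of repositories and to activity tracked over the year following a firm’s entry.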

These findings provide a rare empirical look at how commercial engagement in open source AI development affects participation. The lesson is not that firms always improve community health, but that entry—when accompanied by real contributions—can generate sustained, complementary effort from outside contributors. That evidence runs counter to a common narrative in both policy and public discourse: that corporate involvement necessarily degrades the open source environment and weakens independent oversight.

At the same time, the risks are real. Shared cores can become chokepoints. A single governance failure, or a shift in the incentives of the core’s dominant sponsor, can cascade across the ecosystem. For instance, in 2021, the company behind Elasticsearch, an open source search engine, changed its licensing terms, leading to a major community split and forcing others to build alternative versions of the software. Similarly, Oracle was widely criticized for putting profit ahead of community needs after taking over Java, sparking concerns about long-term trust in the platform. And in 2014, the Heartbleed bug, a serious security flaw in OpenSSL (software used by much of the internet), revealed how under-resourced and fragile critical open source infrastructure can be. Evidently, formal open access to source code does not ensure a community’s meaningful influence over the direction of development.

These are political economy concerns, not just technical ones. As Filippo Lancieri, Laura Edelson, and Stefan Bechtold argue, the AI regulatory environment is increasingly shaped by global strategic behavior. Governments compete to attract investment and shape global standards. Firms respond with lobbying, arbitrage, and in some cases, tactical openness—releasing code not for the sake of accountability, but to deflect liability or preempt scrutiny. In this context, open source is not politically neutral. It can be used to channel regulatory attention away from more structural concerns. Without a deeper empirical understanding, policy risks treating openness as an end in itself.

Open source can support accountability and resilience, but only under specific conditions. Policymakers should shift their focus from the existence of openness toward its governance. Who maintains the core? Who decides which features are adopted or delayed? What mechanisms exist for dispute resolution, security prioritization, and balancing commercial incentives with public interest?

Regulators should also demand more meaningful disclosure—not just of model weights, but of training datasets, training procedures, contributor diversity, vulnerability response times, and governance arrangements. Public procurement and safety certification processes should require evidence of independent participation and responsiveness in open-core projects. And competition authorities should treat open infrastructure as a potential locus of market power—not just in product markets, but in the processes that shape technical evolution.

The promise of open source in AI is real. It may help decentralize control, accelerate scrutiny, democratize tooling, and invigorate open-core project communities. But that promise depends on institutions, incentives, and transparency around the processes that govern shared infrastructure. Without evidence, openness risks becoming symbolic—celebrated in theory, ineffective in practice.

If open cores are to support AI accountability, security, and competition, empirical research must inform how these models are designed, maintained, and governed. The time to build that foundation is now.

Author Disclosure: The authors report no conflicts of interest. You can read our disclosure policy here.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.
