Anthropic has formed an exclusive artificial intelligence consortium to use its general purpose artificial intelligence model, Claude Mythos, to identify and fix vulnerabilities in critical internet and digital infrastructure. Madhavi Singh warns this consortium, called Project Glasswing, could contravene antitrust law and argues for regulatory oversight to ensure that it does not become a front for an illegal cartel.
Several weeks ago, an “accidental” leak in Anthropic’s content management system revealed that the company had secretly developed a general-purpose model called Claude Mythos, which it claimed far exceeded the capabilities of existing models. The model’s reasoning and coding capabilities were so advanced that it identified thousands of security vulnerabilities in a single day, including flaws in every major operating system and web browser, and demonstrated the capacity to execute a completely autonomous cyberattack. These vulnerabilities affected critical internet infrastructure and had escaped expert human reviewers for decades.
Fearing that malevolent actors could weaponize the model to cripple critical infrastructure, Anthropic did not publicly release the model. Instead, it formed “Project Glasswing,” a defensive industry consortium of more than 40 companies that maintain critical internet infrastructure, including Apple, Google, Microsoft, Amazon, Cisco, CrowdStrike, NVIDIA, and JPMorganChase. This consortium would enjoy exclusive access to a preview version of the model, Mythos Preview, and was mandated to use that access for the sole purpose of finding and fixing vulnerabilities in the systems that underlie the internet’s most vital services. Through the alliance, the partners would share sensitive information and best practices with one another. Thus emerged the “AI Avengers,” a private league of corporate titans voluntarily undertaking the responsibility to guard civilization from an existential risk that is, quite literally, of their own making. The powerful and well-intentioned AI Avengers, like their Marvel Comics counterparts, operate without government oversight or intervention. A significant wrinkle in this grand plan, however, is that the coalition risks violating Section 1 of the Sherman Antitrust Act, which prohibits combinations in restraint of trade, or “cartels.”
Competition risks
Section 1 prohibits any anticompetitive concerted action, including price fixing, bid rigging, market allocation, group boycotts, and information sharing. Project Glasswing clearly constitutes concerted action, and its information-sharing protocols, or the collective decision to deny critical data and best practices to those outside the coalition, risk being categorized as an illegal restraint of trade. In recent years, information sharing has attracted significant antitrust scrutiny. The Department of Justice has made clear that information exchange alone can constitute a violation, even when it does not result in explicit price fixing. In the context of Project Glasswing, more than 40 of the world’s most powerful firms are sharing technical data and “best practices” in a private circle, creating a transparency vacuum that risks aligning their market behavior in ways that suppress competition from those outside the loop.
The coalition also risks being viewed as a concerted refusal to deal, which could be per se illegal. Because only consortium members receive access to this highly advanced model, they gain a massive advantage in securing their systems before their rivals can. For example, Google can harden its systems (such as its browsers) while the numerous third parties that compete with it across its many verticals are denied the same opportunity. Once the model is eventually released publicly, these rivals’ systems might be more vulnerable than Google’s, granting powerful incumbents a significant competitive edge.
Similarly, while CrowdStrike is part of the coalition, other smaller cybersecurity firms are left out, further skewing the playing field in favor of the incumbents. Such a coalition risks giving Big Tech another chance to gatekeep advanced technical capabilities, artificially restrict supply, and create an environment of scarcity that suppresses competition. Interestingly, while some AI rivals like Google (which operates Gemini) are part of the coalition, others like OpenAI are absent. Some notable Big Tech players like Meta and Oracle are also missing, presumably because they are deemed not to operate “critical internet infrastructure.” It remains unclear how the criteria for this designation were established.
While the security risks posed by advanced AI models are undeniable, AI firms also have an unfortunate history of “crying wolf.” OpenAI initially held back the release of GPT-2, warning that the model could be “too dangerous,” but when it was eventually released, the anticipated large-scale harms did not materialize. Some critics argue that the security theater surrounding Claude Mythos is actually a marketing gimmick—a classic Big Tech tactic that invokes privacy and security as pretexts to avoid open-sourcing models or sharing them more widely. It is therefore crucial to ensure that this defensive AI coalition does not become a front for an illegal cartel.
Regulatory oversight of AI Avengers
Fortunately, tools exist that allow companies to engage with the government to receive formal views on the legality of their collaborations. The DOJ allows companies to submit a business review letter to receive guidance on the application of antitrust laws to their proposed conduct. Indeed, the most frequent requests involve proposals to form joint ventures or to share business information. This process could provide necessary guardrails for the scope of information exchange, the duration of exclusivity, and the specific purposes for which the model may be used.
For instance, the process could scrutinize how the coalition intends to ensure that partners use the model solely for system resilience and not to improve the quality of their commercial offerings to gain a head start over those excluded from the alliance. Such a process could also investigate whether the security reasons for restricting output are truly warranted and do not undermine competition more than is strictly necessary. It could stipulate that the information exchange remain limited to security risks rather than becoming a channel for other cartel-like behavior and ensure that the best practices established by the coalition do not become de facto industry standards that operate as entry barriers for other players.
This process can even work on an expedited timeline. During the COVID-19 pandemic, the DOJ issued multiple business review letters declining to challenge collaborative efforts among medical suppliers to accelerate the production and distribution of personal protective equipment (PPE) and medication. While a business review letter (like merger review) does not preclude subsequent enforcement actions, it significantly reduces the likelihood of such an action and provides essential guidance on permissible conduct.
In February 2026, the DOJ and Federal Trade Commission launched a new joint public inquiry seeking input on updated guidance for competitor collaborations, specifically highlighting emerging technologies and data sharing as areas of concern. Thus, federal antitrust authorities are not unsympathetic to the need for some degree of industry-level collaboration to tackle the thorniest challenges posed by technological breakthroughs. Seeking guidance from the regulator would ensure that such “prosocial” collaboration also remains procompetitive. Separately, there are plans to establish an AI-ISAC (AI Information Sharing and Analysis Center) to promote the sharing of AI-security threat information and intelligence across critical infrastructure sectors. The ISAC would help AI companies exchange information about security threats through a supervised framework under the Department of Homeland Security, ensuring they don’t feel the need to create their own parallel, private Avengers-like defensive consortium.
Conclusion
In “Avengers: Age of Ultron,” Tony Stark (Iron Man) and Bruce Banner (the Hulk) attempt to build a preemptive global defense AI that, in an ironic and entirely foreseeable twist, escapes and becomes the very existential threat it was designed to fight. The irony of Project Glasswing is similar: the threat these companies seek to guard against is their own creation, abetted by the remarkable hubris of creators who believe that only they can fix the chaos they unleashed. A critical distinction, however, is that unlike the selfless fictional superheroes, these AI Avengers are real, profit-maximizing corporate entities. Their primary loyalty lies with shareholders and private owners, not the public interest. Consequently, they warrant more skepticism and regulatory oversight than “Earth’s mightiest heroes.”
Author Disclosure: The author reports no conflicts of interest. You can read our disclosure policy here.
Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.