Skewed incentives and the distribution of resources toward corporations have undermined the integrity of scientific research and contributed to the public’s distrust in expertise. Matt Lucky explores the political economy of scientific research and potential reforms to bring scientific research closer to its ideals.


Various scholars and commentators have lamented the crisis of public trust in scientific expertise. For instance, science skeptics, now in power at the United States Department of Health and Human Services and the Centers for Disease Control and Prevention, are purging traditional scientific medical experts from decision-making. The answer to this crisis, however, is not to admonish people to reflexively “trust the science” and obey experts. (Dis)trust in science has become assimilated into partisan identities in the past two decades, and we all need to rediscover how to assess scientific performance as objectively as possible.

The fact is that science has worked well enough to have built the contemporary technoscientific world, which is a damn fine achievement. Yet it is frequently marred by mistakes, fraud, and incompetence. Part of the challenge is the need to locate the Aristotelian golden mean between the extremes of excessive credulity toward science on one hand and excessive incredulity on the other. We need to trust science in the right way, which entails maintaining a healthy understanding of the very real limits of what it can say. Applying scientific study to the practices of science itself can serve as a guide here. Science is a pattern of social behaviors, and we can measure and test it scientifically, just as economists measure behavior in markets. While the science of science is imperfect, just as I contend science is generally imperfect, it mostly works. Hence, the science of science can serve as a useful tool for understanding and intervening in the world better than we could manage in its absence.

A starting point for this discussion is to review the normative structure of science as articulated in Robert Merton’s The Sociology of Science, a foundational text in science and technology studies and a common reference point for how scientists ought to act and why society ought to trust them. Those norms set our expectations of what science ought to be. From there, we can contrast those norms against actual science in action to identify gaps between our ideals and realities. In practice, science can never perfectly realize the ideals we set for it. That creates a persistent trap for trust in science, as the public will inevitably be disappointed (outraged even!) when they see science in action failing to live up to its ideals.

One path to trusting science in the right way, which includes healthy skepticism, is exploring this gap between ideals and realities. More precisely, we can trace how science falls short within the political economy of knowledge production—the incentives and distribution of resources that determine which research is conducted and how—which also points to paths to reform. Funding structures and career incentives for science and researchers tell much of this story. To unpack this narrative, I will center on the norms of 1) disinterestedness and 2) organized skepticism.

Disinterestedness

The ideal of disinterestedness instructs that a scientist ought not to have skin in the game, favoring one research finding over others. Philosophy of science teaches that we will never be perfectly disinterested. On a surface level, scientists must decide out of the whole of nature what topics interest them (or their potential sponsors) enough to focus on; we study things we value. What is more, even seemingly innocuous decisions in research methods can be normatively laden. These include choosing the P-value threshold used to determine statistical significance (the P-value being, loosely, the probability of obtaining a result at least as extreme as the one observed if the null hypothesis were true). Setting that threshold is effectively making a gamble on the risk of obtaining a false result that could mislead public policy. Such a false finding at the Food and Drug Administration, for instance, can result in a carcinogenic substance erroneously being labeled as safe for human consumption. Thus, interests and subjective judgments inevitably leak into the practice of science on the margins. Yet, while scientists cannot be perfect, they can still approach the ideal.
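
To make that gamble concrete, here is a minimal sketch in Python (the sample sizes, thresholds, and number of simulated studies are illustrative assumptions, not parameters from any study discussed here) showing how the chosen significance threshold directly sets the rate of false findings when there is, in truth, no effect to find.

```python
# Minimal illustration (hypothetical parameters): how the chosen significance
# threshold sets the rate of false positives when the null hypothesis is true.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 10_000   # simulated studies in which there is truly no effect
n_per_group = 50     # observations per group (illustrative assumption)

false_positives = {0.05: 0, 0.005: 0}
for _ in range(n_studies):
    # Both groups are drawn from the same distribution, so any "significant"
    # difference is a false finding by construction.
    treated = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    p_value = stats.ttest_ind(treated, control).pvalue
    for alpha in false_positives:
        if p_value < alpha:
            false_positives[alpha] += 1

for alpha, count in false_positives.items():
    print(f"alpha = {alpha}: {count / n_studies:.3%} of null studies declared 'significant'")
```

Run as written, roughly five percent of the purely null studies clear the 0.05 bar and about half a percent clear the 0.005 bar. That is the trade-off the threshold choice encodes: a looser threshold buys sensitivity at the price of more false findings of the kind that can mislead regulators.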

On a practical level, scientists often deviate from the norm of disinterestedness because science is prohibitively expensive for non-state and non-corporate actors. This is readily apparent for grand projects such as large particle colliders, though even comparatively smaller-scale social science research can be costly: a mere subsection on the Cooperative Election Study (CES), which polls tens of thousands of Americans on political opinions and voting choices, can cost thousands of dollars. Thus, scientists are frequently obligated to play the outside game beyond the laboratory to interest governments, corporations, and philanthropic foundations in funding their research projects. Without such patrons, modern “big science” would be impossible. Yet, these pecuniary requirements also open the door to conflicts of interest that permeate science in action.

One way those conflicts of interest influence knowledge-making is through the agenda-setting power they bestow upon funders. Scientists cannot investigate all questions, so there is inevitably a pattern of topics that receive attention or neglect. Stanford professor John Ioannidis, a leading expert in metascience, illustrated this agenda-setting power at work in pharmaceutical research, lamenting at the Stigler Center 2025 Antitrust and Competition Conference that industry sponsorship means scientists focus on investigating questions related to the products developed by industry to the exclusion of alternative questions. Zeynep Pamuk, a political theorist at the University of Oxford, for instance, recounts in her book how industry sponsorship for molecular studies on the health safety of genetically modified organisms (GMOs) has developed extensive evidence for their safety as food products (great for industry!), but this has come alongside the neglect of studies considering the ecological and socio-economic effects of those same GMOs. Likewise, Ioannidis noted that we have little evidence for important questions such as the health impacts, beneficial or not, of exercise on patients with pre-diabetes or heart disease.

Scientific research is also structured by ownership and uneven access to the objects of science for study. The literal objects in the world we want to study, for instance, drug candidates and social media datasets, are generally the property of private institutions. In order to do science, researchers are obligated to obtain the consent of the owners of the objects of science, which further empowers the agenda-setting of those owners. Put simply, “what we know” is conditioned in part on “who owns what.”

The issues here with industry, alas, go further. Ioannidis cited in his conference keynote, for instance, a study finding that “among industry-funded trials, 364 (89.0%) reached conclusions favoring the sponsor” and further that “almost all nonrandomized industry-funded studies (90 of 92 studies [97.8%]) favored the sponsor.” Given this, Ioannidis remarked, we might simply cease wasting money on the actual work of science: once you know the study is sponsored by industry, you effectively know what direction the findings will point!

Career incentives for scientists provide another path away from the ideal of disinterestedness. Specifically, in science, there is an alignment problem where what we want from scientists (true knowledge) is different from what we ask of them (publications/citations), thus opening the door to specification gaming. That is, it is much easier to judge a scholar’s curriculum vitae than the epistemic merits of their contributions to an esoteric field, and hence scientists are incentivized to prioritize the former over purely disinterested scholarship. Ioannidis speaks to this issue, noting that “seemingly independent, university-based studies may be conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure.”

More perniciously, career incentives create temptations to commit fraud, as evidenced by contemporary retractions increasingly being driven by “large-scale, orchestrated fraudulent practices (papermills, fake peer-review, artificial intelligence generated content).” That is, the demand for markers of productivity in science is creating an industry of nefarious actors, such as paper mills, willing to supply badges of distinction to scientists in need. In another recent working paper, one team reports how they were able to buy citations to burnish their reputations.

Taken together, pecuniary and career incentives, that is, the economics of knowledge production, exert a persistent gravitational pull away from the norm of disinterestedness.

Organized Skepticism

Organized skepticism holds that scientists have a duty to subject each other’s work to challenge and verification to ensure the accumulated body of claims we call “science” is true enough. Put slightly differently, organized skepticism is the core guarantee backing up science. It promises quality control by weeding out errors, incompetence, and fraud through mechanisms such as peer review and experimental replications. When other norms, such as disinterestedness, are violated, organized skepticism is intended to be the final line of defense that interdicts those earlier errors.

This promise that organized skepticism can generate a trustworthy foundation is not truly fulfilled in practice. Here, I will focus on replications, given the ongoing replication crises across the sciences. Researchers conduct replications of previously published studies to ensure that the experiment churns out the same results each time. Repetition removes concerns that circumstantial influences or data manipulation produced the initial results. Alas, replications are infrequent, and when they are performed, they can reveal unsettling degrees of unreliability in previously published work. In the field of psychology, for instance, a team attempting to replicate 100 previously published studies found that only 39% of the replications produced results matching the originals.
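
As a back-of-the-envelope illustration of why replication rates can sit well below 100% even before fraud or sloppiness enters the picture, consider the following sketch. It is hypothetical: the base rate of true hypotheses, the significance threshold, and the statistical power are assumed values chosen for illustration, not estimates from the psychology project cited above.

```python
# Hypothetical back-of-the-envelope calculation: expected replication rate given
# a base rate of true hypotheses, a significance threshold, and statistical power.
# All parameter values below are illustrative assumptions, not empirical estimates.

prior_true = 0.25   # share of tested hypotheses that are actually true
alpha = 0.05        # significance threshold (false positive rate under the null)
power = 0.6         # probability a study detects a true effect

# Among published "significant" findings, how many reflect true effects?
p_true_given_sig = (power * prior_true) / (
    power * prior_true + alpha * (1 - prior_true)
)

# An identical replication succeeds with probability `power` for true effects
# and probability `alpha` for false positives.
expected_replication_rate = p_true_given_sig * power + (1 - p_true_given_sig) * alpha

print(f"Share of significant findings that are true: {p_true_given_sig:.1%}")
print(f"Expected replication success rate: {expected_replication_rate:.1%}")
```

With these assumed numbers, the expected replication rate comes out just under half, a reminder that modest statistical power and a limited supply of true hypotheses can depress replication rates on their own, which is part of why interpreting failed replications requires care.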

Across the various sciences, the practicality of verifying results through replication can vary wildly. In political science, for instance, if I have doubts about someone’s analysis of data from the American National Election Studies, I can simply download the latest dataset and rerun the analysis in the R programming language. Yet, if someone’s project entails field work accessing administrative records in Gambia or a proprietary dataset owned by Facebook, then replication is off the table as a practical matter. Again, the ownership of the objects of science matters for replications as much as for initial research.

In a similar vein, replications often quite literally replicate the financial issues covered above. If a dissenting scientist wishes to challenge the findings of one laboratory, then they need their own laboratory with the equipment, expert staff, and financing. Not only is that another substantial financial burden, but it is one that often will not make sense career-wise for scholars. Reaching tenure for academics frequently depends on demonstrating research productivity by publishing in major field journals that prioritize novel lines of research. Scholars, obligated to play the outside game of raising money through grant applications to finance their laboratories, have a clear incentive to allocate that hard-won cash to potentially tenure-securing new lines of inquiry, rather than pay an opportunity cost by doubling back on colleagues’ work. All of this adds up to the point where organized skepticism is like a check that is issued, but not meant to be cashed except in dire need.

There is also a deeper philosophy of science problem with many replications, which is that when a replication fails, it is difficult to say if the claims of the initial scholar were wrong or if the individual conducting the replication is in error. Sociologist Harry Collins sums up the conundrum here: “to find this [are gravity waves real?] out we must build a good gravity wave detector and have a look. But we won’t know if we have built a good detector until we have tried it and obtained the correct outcome! But we don’t know what the correct outcome is until…and so on ad infinitum.” This is what Collins describes as the experimenter’s regress. On the frontiers of knowledge, it is difficult to determine if the failure of a replication is the result of a fault in the claim or in the capacities of the replicator. Hence, a crisis of failed replications can be indicative of emerging technoscientific knowledge, rather than fraud or mistakes. That said, the experimenter’s regress is largely an issue on the frontiers of science, rather than in spaces with well-established methods.

Reforms 

Potential reforms can be understood as changing the economics of knowledge production to incentivize and enable better epistemic behaviors. For instance, legislation such as the Digital Services Act in the European Union that pries datasets from the hands of corporations and makes them publicly available can mitigate issues around disinterestedness and organized skepticism. That way, what we learn is less dependent on who owns what. Likewise, we might experiment, as Pamuk recommends, with new financing channels for science that can expand the national research agenda by gambling on novel lines of inquiry, and even distribute a small number of grants from the National Academy of Sciences at random. Greater transparency obligations for conflicts of interest in scientific publications are another option.

Finally, altering tenure review and publication practices to allocate greater weight on the margins to 1) replications and 2) replicated studies can shift career incentives toward investing more in knowledge verification. Ioannidis, for instance, suggests that studies can be marked as merely provisionally published until they have been replicated. Further, replications themselves ought to be published alongside the originals as contributions to the literature in their own right, making the chain of verification clear.

With all this said, the damnedest thing that bears repeating is that science already mostly works. Science and technology built modernity, which is a miraculously fine achievement. Our path forward for science policy, then, ought to recognize its true limits in practice, as revealed through science of science research. That way, when science in action fails to perfectly achieve its ideals, public expectations are not disappointed, because we understand that scientists are flawed humans doing their best. What is more, we ought to apply that recognition to reforms that narrow the gap between our ideals and realities, so that when science falls short, it falls short by less. In doing so, we can trust science in the right way, avoiding the extremes of excessive (in)credulity.

Also, listen to Bethany McLean and Luigi Zingales chat with John Ioannidis on a recent Capitalisn’t episode here.

Author Disclosure: The author reports no conflicts of interest. You can read our disclosure policy here.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.