
Event Notes: Academic Bias Under the Microscope

erhui1979 via Getty Images

That scholarship often reflects conscious and unconscious biases has long been an open secret in academia. On April 22, Professors Christian Leuz, Anat Admati, and Tommaso Valletti discussed the sources of industry bias in economic and business research and possible avenues of mitigation in a panel moderated by John Barrios.


An open secret pervades academia: research studies often reflect various types of biases that compromise their methodological integrity and distort their results. There are many forms of bias, said Christian Leuz, a business professor at the University of Chicago's Booth School of Business, to open April's "De-Biasing Academic Research" panel. The panel was part of the Stigler Center's 2022 Antitrust and Competition Conference and also featured Professors Anat Admati of Stanford University and Tommaso Valletti of Imperial College London. Professor John Barrios of Washington University in St. Louis moderated.

There is publication bias, or the reluctance to publish studies that show insignificant or null results, continued Leuz. Researchers may not be vigilant about reviewing data or code for errors if initial results validate their hypotheses. There is also what statistician Andrew Gelman calls the "garden of forking paths," which underlines that researchers make choices in how they analyze and report data. For example, there may be multiple methodologically sound analytical tests available to a study, but if a researcher only chooses the test that they think will produce significant results, or runs multiple tests but only reports those that produce significant results, the study's findings become biased.

Leuz pointed out that these biases can be both conscious and unconscious, which is no less true for the panel's primary focus: industry bias. Leuz cited several meta-studies showing that business-sponsored research (e.g., research conducted with business grants or using proprietary corporate data that comes with restrictions) often reports results that are financially favorable to the corporate sponsor. "Industry-sponsored studies tend to have a bigger sort of disconnect or less agreement between the results and the conclusions," Leuz said, which suggests the biases occur not in the tests but in the interpretations of their results.

And, at least for the corporate sponsors, these results don’t even need to be conclusive or groundbreaking. Citing one case study on the tobacco industry, Valletti said that the goal of corporate research sponsorship is simply to sell doubt. “Because that’s what you need in policy…if you say we don’t know enough [such as about the harmful effects of tobacco use], you’re not going to go anywhere.” 

In economics, as in much academic research, scholars often conclude their papers by claiming that further research is required, whether out of humility or the desire to encourage further research that will cite their paper. But in policy, Valletti continued, "that's a death sentence…because someone is going to read that line, 'more research is needed, we don't know enough,' and then it stops the entire conversation."

Biases crop up elsewhere. Quoting Luigi Zingales, a professor at the University of Chicago Booth School of Business and the director of the Stigler Center, Valletti cited the Uber Problem, where a company will only release its data for research if the final paper portrays the company favorably. The company may even be able to veto the publication of the paper, said Valletti.

A researcher, particularly a young researcher who needs to explore a new and exciting dataset to launch their career, may nevertheless use this data and produce methodologically sound results. But the overall literature will be skewed toward positive results, while negative results are suppressed. Similarly, some companies will not release their data at all, and thus remain hidden from ongoing discussions.

Perhaps on a more theoretical level, Admati said, the very assumptions that undergird economics, the assumptions learned in introductory and intermediate economics courses, contribute to biases that favor industry. Admati admonished the field for its unwillingness to investigate these assumptions, particularly when they help to advance financial and technological innovations that harm consumers.

"Academics are the problem…with such friends, who needs lobbyists?"

These assumptions, promoted through clever research, influence how policy is subsequently developed, said Admati. She relayed a short anecdote about a compliance officer at a big bank who told her, "Academics are the problem…with such friends, who needs lobbyists?"

Barrios pushed the panel to explain how academics might start thinking about solutions to industry biases in research.

Leuz discussed how medical research is making efforts to aggregate results so that lobbyists and businesses cannot cherry-pick results from a single study when other studies lead to different conclusions. Some journals also refuse to publish work that uses private data.

Business and economics journals have yet to catch up, though. Both Valletti and Leuz mentioned that academic journals in these fields often enforce a certain conservatism in the kinds of papers they publish, including the kinds of methodologies, data, and interpretations, which may perpetuate certain biases. If researchers, particularly junior academics, want to publish, they often need to preserve the status quo.

At the same time, as Valletti said, these journals also don't want to publish papers that merely confirm an already-known model, even when such confirmation would be beneficial to policy-making. Rather, the journals' editors are interested in the exception. "If you had to advise a student now [who] found this cool data of a merger to see if prices went up after the merger, you would stop this person most likely." The rise in prices due to industry consolidation is a pattern we have all seen before. But if journals only publish the exceptions to these models, it casts doubt on the pervasiveness of an issue, as Admati and Valletti warned previously.

Finally, Admati and Leuz mentioned the need to acquire better data, which not only provides a more comprehensive insight into the effect of, say, mergers, but gives academics the data they need to conduct interesting research unencumbered by possible conflicts of interest.

Valletti agreed and mentioned that the "FCC has the power to require some data to be disclosed," which has enabled research into telecommunications. The same is true for the Department of Transportation and airlines, but there is no such regulation in the digital sphere.

The discussion of the varieties of bias that taint economics and business research is perhaps only beginning, but that the issue is now openly debated is an encouraging first step. In the end, as Gelman's work implies, it may be impossible to abolish biases completely.

Yet reevaluating what makes for important research and offering transparency about conflicts of interest and data usage (and, on the flip side, having industry be transparent with its data) offers a concrete path to minimizing the influence of industry as a source of bias in academic research.