The following is a transcript of John Ioannidis’s keynote address at the 2025 Stigler Center Antitrust and Competition Conference—Economic Concentration and the Marketplace of Ideas.
Stefano Feltri
Hi, welcome back. My name is Stefano. I’m a journalist. In terms of disclosure, my salary is partially paid by Bocconi University. I also work for the Italian public radio, and I have my own Substack.
As you heard this morning, economists are not used to tackling the problem of conflict of interest in their own field. The paper that Tommaso Valletti presented is something relatively new, an exception rather than the rule. In that paper, they use tools that come from an extensive literature in medical and pharmaceutical research, where there is more experience in studying how conflicts of interest shape findings and how much we can trust different kinds of research based on how conflicted they are.
We will discuss this issue with John Ioannidis, who is probably the most well-known person in this field because he has basically challenged everything. One of his papers is titled “Why Most Published Research Findings Are False,” and there is a Wikipedia entry just for the paper, which is pretty remarkable. He also challenged the consensus during the pandemic that lockdown was the appropriate policy, and actually, apparently it wasn’t; we will discuss that in detail later on.
John is Professor of Medicine at Stanford Prevention Research Center; of Epidemiology and Population Health; and also of Biomedical Data Science. On his website, he says that his aim is “to improve research methods and practices and to enhance approaches to integrating information and generating reliable evidence. Science is the best thing that can happen to humans, but doing research is like swimming in an ocean at night. Science thrives in darkness.”
With no further ado, I give the floor to John. I have to remind you to start with a disclosure of your potential conflicts of interest, and then let’s start with the presentation.
John Ioannidis
Thank you for this wonderful invitation. I have learned a lot attending this conference, all the informative and wise presentations and panel discussions.
My disclosures. I do research on bias, so I suspect I must be very biased and you should be very careful about everything that I might say. My salary support is from Stanford University; it includes federal government grants.
I have no funding or other support from for-profit entities. I have no personal social media; this is by choice. I explained on my personal site at Stanford that I fully respect those who are very wise in putting their thoughts in social media. Personally, I make a lot of mistakes. I have to think a lot about what I say and revisit it before I publish, and I don’t want to use social media to make a fool of myself more frequently than it is absolutely unavoidable.
Also, I have to say, because I think it’s very relevant to what I’m going to discuss and to this conference, that I have no political party affiliation, membership, link, or whatever. I have been attacked by the extreme left wing, but this doesn’t mean that I’m extreme right-wing; I have been attacked by the extreme right wing, but this doesn’t mean that I’m extreme left-wing; I have been attacked by extreme moderates, but this doesn’t mean that I’m anything.
I’m just grateful for receiving criticism, no matter where it comes from.
Evidence-based medicine has been a catchword for many decades in medicine. We say that we want the best evidence and then to communicate, in an unbiased way, that evidence to whoever is affected—it could be patients, it could be communities, it could be the whole world when it comes to public health—and have some shared decision-making in using that evidence.
You have probably heard about pyramids of evidence that have the most rigorous evidence at the top. For example, for interventions, multiple randomized trials integrated into systematic reviews and meta-analyses; then, lower down, single randomized trials, then observational evidence, and so forth.
The truth is that these pyramids of evidence—even though they are feats of science, and they’re remarkable, and we should be very proud of what science has accomplished—are put in the middle of a huge desert. And the desert is much bigger than the pyramids. The desert typically includes, among other things, conflicted experts, media, social media influencers, and many other players who may be more important than these gigantic pyramids of evidence in science.
Several years ago I wrote a commentary to honor my late mentor David Sackett, saying that evidence-based medicine has been hijacked by all of these conflicted stakeholders. I admitted that I’m a complete failure because, even though I work in that field, I could not do really much—perhaps I could do nothing—against that onslaught of conflict and bias that has hijacked “evidence-based medicine.”
When people started using that term in 1990 or shortly thereafter, the key tools of non-evidence-based medicine that had been splendidly serving vested interests included: ex cathedra pronouncements (like this one) by opinion leaders at various conferences; editorials; non-systematic reviews; professional society guidelines done for the glory of the profession; pamphlets from drug reps; and other marketing material disseminated at medical, quote, unquote, “scientific” meetings that have more marketing than science.
Well, in 2025, the key tools of non-evidence-based medicine include conflicted stakeholders on steroids: big pharma, big food, big tobacco, big tech, big anything, media, social media, politics, conspiracy theories, “infodemics.”
We also have the evolution of everything we had before. We have guidelines that are based on seemingly rigorous but still mostly expert-opinion-based approaches. We have lots of systematic reviews and meta-analyses, but most of them are very poorly done. We have about a million randomized trials, but most of them are not relevant to the questions that we want to ask and answer. Lots of observational studies, many millions and tens of millions of them (just about every bias, the works!). And also the advent of precision medicine and precision health, which is mostly precision dreams: extremely costly, extremely susceptible to biases and conflicts.
Evidence is less than optimal. Most of the time we recognize that we don’t have as much as we would wish. This slide is just one snapshot of an empirical evaluation that we did several years ago; we have repeated it with pretty much similar results. Looking at 1,400 systematic reviews on different topics across all of medicine and health, if you ask for high-quality evidence, with both statistically significant results and a favorable interpretation of the intervention, only 25 had both. This is less than 2% of the questions that one could ask and wants to get relevant answers to.
When you have such limited evidence, it means that the interpretation of the evidence unfortunately (or fortunately?) is more important than the evidence itself. Depending on who interprets and how they would interpret the evidence, it would make a difference. Here you start seeing the impact of bias.
First of all, even the evidence that becomes available may be subject to enormous biases. This slide is a “meta-meta-meta-assessment” that I did with Daniele Fanelli and Rodrigo Costas. We took meta-analyses, and then we took meta-analysis of meta-analyses, across not just one scientific field but every scientific field that we could find meta-analyses on. This covers not just medicine, but any other science.
We tried to see—when we have the benefit of having more than one study, we have multiple studies on the same question, so you can compare notes between different studies—whether that pattern of results across and between these different studies is compatible with some biases influencing the evidence. We were able to ask that question across eighteen different types of biases. We found these biases in most scientific fields. But we cannot generalize and say that, well, if you have found this bias in a field of medicine, you can take that to economics, or you can take something from economics and apply it to astrophysics.
If you divide science, for example, into three major bins—physical sciences, biological/biomedical sciences, and social sciences—the bias of small-study effects, meaning that the small studies we do give very exaggerated effect estimates, is very prominent in the social sciences. That includes economics; we’ve done more in-depth work on that, suggesting that the effect sizes in economics, on average, are about four times inflated compared to what might be the truth. It is pretty prominent also in the biosciences. It doesn’t exist, on average, in most physical sciences. Physical sciences, like astrophysics, have telescopes; they have such massive data that their problem is not small data sets, as we face in many biomedical or social science problems. They have more data than one might need.
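To make the small-study-effects idea concrete, here is a minimal sketch, not from the talk, using synthetic data and a simplified, unweighted variant of an Egger-type regression: each study's effect estimate is regressed on its standard error, and a slope well above zero indicates that the smallest, noisiest studies report the largest effects.

```python
# Minimal sketch (not from the talk): a simplified, unweighted Egger-type
# regression for "small-study effects" in a meta-analysis, on synthetic data.
# A slope well above zero means the smallest, noisiest studies report the
# largest effects, the pattern described above for social and biosciences.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
true_effect = 0.10
n_studies = 60

se = rng.uniform(0.02, 0.40, n_studies)               # small studies have large standard errors
effect = true_effect + 1.5 * se + rng.normal(0, se)   # simulated exaggeration in small studies

fit = sm.OLS(effect, sm.add_constant(se)).fit()
print(f"Egger-type slope: {fit.params[1]:.2f} (p = {fit.pvalues[1]:.3f})")

smallest = se > np.quantile(se, 0.75)                 # the noisiest quarter of studies
print(f"Mean effect in the smallest studies: {effect[smallest].mean():.2f} "
      f"vs. true effect {true_effect:.2f}")
```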
So, studying biases is a good start. But one cannot extrapolate across all fields.
What is a common denominator in many situations is that the pyramids of evidence are not just nonexistent; they’re bulldozed by people who want to destroy them. That could be because of money; because of allegiance; or because of zealotry, advocacy, activism, you know, just believing that, “What I’m standing for is the truth and the future of mankind.”
We know that almost any result can be obtained. There’s been a lot of effort, in both social sciences and biomedical sciences, to try to restrict options through pre-registration.
Even with pre-registration, there are still a lot of options. Here’s one example, where I’m analyzing the National Household Survey, a very reliable database capturing a representative sample of the US population, in 1 million different ways. How do I do this? Well, if you look for the association between different factors and mortality, you need to adjust for other things that affect mortality. So, if you can adjust or not adjust for each of 20 things, you have two raised to the 20th power options. That is about 1 million analyses.
For example, if you take alpha-tocopherol vitamin E levels (which is the far-right panel on the slide), depending on what you decide to adjust or not adjust for, you will have 700,000 analyses that suggest that vitamin E decreased the risk of death, and 300,000 analyses that suggest that it increased the risk of death. Some of them will be significant, some of them will be highly significant, some of them will be non-significant. It depends on what you want to show. So, if there’s even a little bit of bias, people will report what they want to hear.
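The combinatorics behind this can be illustrated with a minimal sketch, not from the talk and not using the actual survey data: synthetic data, hypothetical covariate names, and only four candidate adjustment variables (16 models instead of the roughly one million that 20 variables would give). The point is only that the estimated effect of the exposure can point in opposite directions depending on which adjustment set is chosen.

```python
# Minimal sketch (not from the talk): the "adjust or don't adjust" combinatorics,
# with synthetic data and hypothetical covariate names. Only 4 candidate
# adjustment covariates are used here (2**4 = 16 models) instead of the
# 20 (2**20, about 1 million) described above.
from itertools import combinations

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical confounding structure: c0 raises mortality and is correlated
# with the exposure, so models that omit c0 flip the sign of the estimate.
covariates = {f"c{i}": rng.normal(size=n) for i in range(4)}
exposure = 0.6 * covariates["c0"] + rng.normal(size=n)   # e.g., a vitamin level
risk = -0.5 + 0.8 * covariates["c0"] - 0.2 * exposure    # true direct effect is protective
death = rng.binomial(1, 1 / (1 + np.exp(-risk)))

estimates = []
names = list(covariates)
for r in range(len(names) + 1):
    for subset in combinations(names, r):                # every possible adjustment set
        X = sm.add_constant(np.column_stack([exposure] + [covariates[c] for c in subset]))
        fit = sm.Logit(death, X).fit(disp=0)
        estimates.append(fit.params[1])                  # coefficient on the exposure

estimates = np.array(estimates)
print(f"{estimates.size} models: {(estimates < 0).sum()} suggest benefit, "
      f"{(estimates > 0).sum()} suggest harm")
```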
Statisticians are under pressure to get results that investigators like to see. Well, first of all, statisticians and methodologists are involved in a tiny proportion of all the research that is being published; most research is done in a methodological vacuum. But even when they’re employed, this survey for example shows that much of the time they’re under pressure to get the results that the investigator wants to see.
Or, maybe it’s not the results that the investigator wants to see, but it could be the results that the sponsor wants to see.
The sponsor does make a difference, at multiple levels. This slide is a review that I published with Emmanuel Stamatakis and Richard Weiler a few years ago about undue industry influences that distort research, strategy, expenditure, and practice. It’s not a single target; there’s distortion at multiple levels that can happen. From very early on, thinking about what the questions are that should be asked, how they should be asked, who should ask them, what kind of results should be disseminated, and then how they should be interpreted, how they should reach clinical practice or public health, how they should get into guidelines, and many, many other layers.
Industry is controlling the vast majority of influential medical research. This slide is one analysis that we published recently, looking at involvement and transparency from industry in the most-cited clinical trials across all of medicine from 2019 to 2022. Industry sponsored about two-thirds of these trials. About half of them were sponsored entirely by industry, with no other sponsor. In the past, industry did not wish to have author affiliations because they said, “No, we’ll just be hiding; let’s have the Harvard or Stanford professor be the front person.” Now they are even taking authorship positions. They have the analysts who will analyze the data. They will set the rules of what is acceptable and what is influential. And they will find their way into the most influential journals, like the New England Journal of Medicine and The Lancet. These journals practically live off publishing these industry trials.
Some types of clinical trials almost always favor the sponsor. This slide is one analysis, for a single year, in which we looked at the largest non-inferiority trials that had head-to-head comparisons and in which one of the agents being compared, one of the drugs, was sponsored: a 96.5% success rate for the sponsor. I would argue, why are we doing these trials? Let’s save some money. Let’s skip that step; we can just say that the sponsor is right.
Why is that? Is it that the research is so horrible? No, the research may be very well done. But the question is asked in a way, and the non-inferiority margin is set in a way, that you’re doomed to get a favorable result for the sponsor, no matter what.
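A minimal simulation, not from the talk and with purely hypothetical numbers, shows the mechanism: even when the sponsor's drug is truly a little worse than the comparator, a generously chosen non-inferiority margin means the trial declares non-inferiority in the vast majority of runs.

```python
# Minimal simulation (not from the talk; all numbers hypothetical): a drug that
# is truly 2 percentage points WORSE than the comparator still "wins" a
# non-inferiority trial almost every time when the margin is set generously.
import numpy as np

rng = np.random.default_rng(2)

p_comparator = 0.60      # comparator response rate
p_sponsor = 0.58         # sponsor's drug is truly slightly worse
margin = 0.10            # generous non-inferiority margin written into the protocol
n_per_arm = 1_000
n_trials = 10_000

wins = 0
for _ in range(n_trials):
    x_s = rng.binomial(n_per_arm, p_sponsor) / n_per_arm
    x_c = rng.binomial(n_per_arm, p_comparator) / n_per_arm
    diff = x_s - x_c
    se = np.sqrt(x_s * (1 - x_s) / n_per_arm + x_c * (1 - x_c) / n_per_arm)
    if diff - 1.96 * se > -margin:   # standard criterion: 95% CI lower bound above -margin
        wins += 1

print(f"Non-inferiority declared in {100 * wins / n_trials:.1f}% of simulated trials")
```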
In fact, sponsors have built their own little city-states within medical research. This slide is a snapshot of ClinicalTrials.gov, the registry of trials, showing what the largest, most prolific sponsors of clinical trials do. If you see a blue circle, its size is proportional to the number of trials that sponsor has run in that sample; and if you see a connection between two blue circles, it means they have decided to do something where they would compete head-to-head, and one might lose. You see very few connections. There’s only one, actually, that’s pretty strong: Sanofi with BMS. Why? Well, because they co-manufactured clopidogrel, one drug, so it’s not really head-to-head competition; they’re still serving themselves, they just share something. The thickness of the outer loops reflects trials where the sponsor is just dealing with their own little—or more than little—portfolio, so the result is going to be what they want to see.
They’re doing research, not for the question that is relevant for medicine or for public health, but for the question that would pertain to their own share of the market. Each of them will try to, not necessarily get a larger share, but just magnify the size of the market so that everyone will get more.
We don’t have available evidence for what matters, such as prevention for conditions like pre-diabetes, or preventing heart failure, or preventing heart disease. We would like to see, for example, what exercise does (I may be biased, but I think it is very effective), yet almost all of the agenda is about pharmaceutical interventions; there’s very little about comparing drugs versus exercise. This is not something that has been done very frequently; we don’t get the evidence that we would like, to be able to give strong recommendations on some things that may warrant strong recommendations.
Most of the way that the research agenda is manipulated is not visible. You just see the end product, the final publication in New England Journal of Medicine. What is not visible is the inner discussions within companies on how they want to orchestrate so that they will manipulate the overall agenda and the overall evidence.
This has been seen in cases where internal industry documents have been released as part of litigation. For example, this slide is from a very nice paper showing the promotion of gabapentin, practically a drug that spread its uses across everything you can imagine; almost any indication would be fine for gabapentin. The intervention was, at all possible levels, to subvert the evidence: hide information about adverse events or harms, and promote a picture of effectiveness that was not congruent with the real evidence.
Big pharma is a latecomer in that. There are even better masters at manipulating evidence and what is disseminated to the public. Big tobacco is the most prominent, the one that has utilized the best experts, lawyers, and communication-science influencers. Even until recently, they were probably still the champions. I have had some of my worst trouble with them, rather than with big pharma. We know that they can infiltrate evidence, information, and communication at all levels. They can do that at the level of scientific evidence. They can do that at the level of influencers. They can do that at the level of media and social media.
There are many meta-methodological studies. This slide is a good summary published by Cochrane on industry sponsorship and research outcomes. The conclusion is that when research is sponsored by industry, on average there is substantial inflation of the results in favor of the sponsor.
When you look at whether there are particular study-design issues that might explain this, you cannot find much. Maybe you cannot find anything. These studies seem to be well done. When I started doing research, some of the industry studies had very obvious flaws. Now they know that they’re spending lots of money on this research to convince people. They know that there are people like me, who create these checklists of bias, so they say, “Okay, this loser John Ioannidis has this list of 10 items. We’ll pay $50 more (compared to the half a million that we’re investing) to make sure that we satisfy his little petty checklist, and that’s going to be fine.”
Still, there are other biases that are not easy to capture. You need a virus-and-antivirus type of situation, where there are new viruses and new Trojan horses that you need to respond to. How do we respond to them?
We respond with systematic reviews and meta-analyses. We try to integrate the evidence, look at it carefully, and see how much of it is good and how much is bad and weak. We have succeeded in having more than 100,000 meta-analyses and several hundred thousand systematic reviews.
Unfortunately, they can also be subverted in the same way. Meta-analysis can become yet another marketing tool. For example, this slide looks at six years of meta-analyses on anti-depressants. We had 185 meta-analyses (if you take the whole period until now, we probably have more than a thousand meta-analyses on anti-depressants). Only 31% of those had any negative statement in the abstract about anti-depressants. This doesn’t mean that they would have only negative statements; they might have had ten positive statements and one negative statement, and that would count. So only 31%. When there was an author who was an employee of the manufacturer of the assessed drugs, these meta-analyses were twenty-two times less likely to have any negative statement about the drug than other meta-analyses.
Actually, there was only one case of a meta-analysis with an industry employee that had some negative statement. That person no longer works with the industry.
Meta-analyses can be done in many ways. This slide is one effort that I put together with my colleague Florian Naudet and his team. We could get more than 9,000 ways to do a meta-analysis of randomized trials. So, this is the highest level in the pyramid! And, depending on how you do it, the graph shows the cloud of results that you would get from these meta-analyses. Some of them would be on one side of the null, others on the other side. In terms of significance, you could also get it whichever way you want.
So, not just observational studies, but also meta-analyses can have the same problem.
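As a rough illustration of that multiverse of meta-analyses, here is a minimal sketch, not from the Naudet paper: a handful of synthetic trials pooled under a few equally defensible analytic choices, which already produces a spread of pooled estimates rather than a single answer.

```python
# Minimal sketch (not from the Naudet paper): a tiny "multiverse" of meta-analyses.
# The same synthetic trials are pooled under different, equally defensible analytic
# choices (effect model, small-trial exclusion, outlier handling), giving a cloud
# of pooled estimates rather than a single answer. Only 8 combinations are
# enumerated here, versus the 9,000+ described above.
from itertools import product

import numpy as np

rng = np.random.default_rng(3)

n_studies = 12
var = rng.uniform(0.01, 0.25, n_studies)                 # per-trial variances of log odds ratios
y = rng.normal(0.0, np.sqrt(var + 0.05))                 # true effect ~0, with heterogeneity

def pooled(y, v, random_effects):
    """Inverse-variance pooling; DerSimonian-Laird tau^2 when random_effects is True."""
    w = 1 / v
    if random_effects:
        fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - fixed) ** 2)
        tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w = 1 / (v + tau2)
    return np.sum(w * y) / np.sum(w)

estimates = []
for random_effects, drop_small, drop_outliers in product([False, True], repeat=3):
    keep = np.ones(n_studies, dtype=bool)
    if drop_small:
        keep &= var < 0.15                               # exclude the noisiest trials
    if drop_outliers:
        keep &= np.abs(y) < np.quantile(np.abs(y), 0.9)  # exclude extreme results
    estimates.append(pooled(y[keep], var[keep], random_effects))

estimates = np.array(estimates)
print(f"{estimates.size} analyses; pooled log odds ratio ranges "
      f"from {estimates.min():.2f} to {estimates.max():.2f}")
```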
I also tried to see how many meta-analyses are both useful and decently done. My conclusion is, it’s about 3%. Good luck with identifying that 3%; I think even people who are experts in evidence have a very hard time navigating all that waste.
One level beyond meta-analyses, we have guidelines. You take single studies and multiple studies, you have syntheses and systematic reviews, and then you have guidelines. They are trying to tell us: what are we going to do? Are we going to use these interventions? Are we going to do that testing? Are we going to apply these measures?
Unfortunately, about 85% of guidelines are utterly unreliable. About 15% are not utterly unreliable, they’re just unreliable partly, and maybe you can use some of that. Here is a list of red flags: The sponsor is a professional society that receives substantial industry funding. Sponsor is a proprietary company, or is undeclared or hidden. Committee chairs have financial conflicts. Multiple panel members have financial conflicts. Suggestions of committee stacking, which would pre-ordain a recommendation regarding controversial topics. No or limited involvement of an expert in methodology in the evaluation of evidence. No external review. No inclusion of non-physician experts, patient representatives, or community stakeholders.
About 85% to 90% fail, for multiple reasons. We have empirical evidence. When we compare guidelines, advisory committee reports, opinion pieces, and narrative reviews in terms of whether conflicts of interest shape their conclusions, they do. They affect the chances that the conclusions will be favorable to the sponsor.
I want to spend a couple extra minutes on stacking because, now that financial conflicts of interest have become very visible, many guideline developers have said, “Okay, we’ll try to take care of that: You cannot get any funding from industry for two years. Well, you can get $50 million after that. But, you know, for two years you shouldn’t get any.”
Stacking means that we have consensus statements without systematic evidence that already carry high risk of bias. Panel members would operate largely based on opinion. Then, you select many panel members to participate because of their strong views, preferences, or allegiances that are independent of evidence. Undisclosed conflicts of interest, including advocacy and activism, can exacerbate the misleading impact of stacking.
I think that we need to do something about it.
Here’s one example about Covid-19. Nature published some guidelines. Now, Nature publishing guidelines on public health, I don’t know, it’s a little bit like the Journal of Clinical Investigation publishing on astrophysics. But they did, and they got together a panel of people who were supposedly experts. The common denominator was that they pretty much came from advocacy and activist groups that believed in Zero-COVID, you know, that SARS-CoV-2 has to be entirely eliminated. The scientific value of that approach is about the same as flat earth. Nevertheless, this slide shows the conflicts of interest: red marks members belonging to activist groups, and yellow marks the conflicts that are actually disclosed in the paper. So, about 1% of the conflicts are disclosed.
Another paper, in BMJ. BMJ is the premier journal in evidence-based medicine. I have published a very large number of papers there; I esteem what they do. We tried to see what happened during the pandemic. We had an analysis that looked at the advocates of Zero-COVID; at the advocates of focused restrictions (declared advocates, so, opposing perspectives); at the most highly cited scientists in epidemiology; and at the most prolific scientists on Covid-19 across the scientific literature. Advocates of Zero-COVID had sixty-four times more papers published in BMJ compared to advocates of the opposite position; and they had sixteen times more papers published compared to the most highly cited scientists and compared to those who were most prolific in the Covid-19 literature. Almost everything they published consisted of pieces that went through no peer review. So, you had the best evidence-based journal completely hijacked by advocacy—by a flat earth theory equivalent, practically—for a number of years.
Here’s another example. Editorials about hormone replacement therapy.
“Hormone replacement therapy” is a misnomer. How did that arise? Well, because there was pressure to convince every woman that she needs to be on hormones: you need to replace your hormones. Evidence does not suggest that. The Women’s Health Initiative and other studies suggest that it is not a good idea. However, we found multiple journals that had support—for their existence or for their conferences—directly from manufacturers of hormones, and that published massively, dozens and hundreds of editorials, by editorialists who were also conflicted with the same sponsors, in favor of hormone replacement therapy.
Cardiovascular disease. Several years ago I wrote a piece saying that professional societies should abstain from authorship of guidelines and disease definition statements.
If you take cardiology, eight of the fifteen most-cited papers—across not just cardiology but across medicine—in a single year are cardiology guidelines. It is a big specialty. Cardiovascular disease is very important, the number one cause of death. Half of the most-cited papers are cardiology guidelines. They have managed to increase the impact factor of some of their journals, like the European Heart Journal, by about tenfold based on publishing these pieces.
These are societies that have extremely strong industry support. When I looked at that—and the numbers may be different now, but I think they are probably worse—it was more than $200 million per year to the American Heart Association, and 77% of the European Society of Cardiology budget. This slide shows the most-cited papers of the most-cited cardiologists. They include 164 guidelines like the ones I just discussed, 24 industry trials, and 11 other articles, three of them partially industry funded.
So cardiology is all taken up by industry. I don’t know if any survivors exist who are not conflicted with industry. The counterargument is that when you look at these cardiology stars, their list of conflicts of interest is about five pages long. So they say, “We work with everyone. We cannot be conflicted since we work with everyone. We take money from everyone. That’s not a problem. We’re the best.”
Industry can also affect what evidence gets generated in the first place. This slide is from an effort we made to look at how the EMA and FDA ask for evidence to be generated so that new medications will be approved, or approved for new indications. We tried to see, in the development of that guidance—on how to do trials, what to look at, what the outcomes are, what the processes are, and what is important—who is running the show. Well, industry is running the show on how the show is to be run: for the industry, by the industry, on the industry. In these documents, about 80% of the comments and more than 80% of the commentators are industry, commenting on what the rules should be for how their own drug will be licensed.
Reanalysis. Do we trust the data sets that stand behind these very influential pieces of clinical-trial evidence? I’m not sure; this slide is one example: BMJ publishing a reanalysis of Study 329 (it sounds a little bit like a submarine, U-329, but it’s a clinical trial) of paroxetine and imipramine for major depression in adolescents. Fifteen years after SmithKline Beecham published that work—suggesting that imipramine and paroxetine are both very effective and safe—a reanalysis by independent investigators, who managed after fifteen years to get hold of the data, showed that they are neither effective nor safe. In the meantime, we had suicidal ideation and suicides in children and adolescents who were on anti-depressants, and it took fifteen years to rerun the analysis.
Along with Tom Hardwicke, we launched a project where we tried to get hold of the most influential papers, in terms of citation counts, across every single scientific field. These are the data, for example, for psychology and psychiatry. We approached the authors, saying, “You have run one of the most influential studies in the history of your field. We want to save that legacy. Send us the data. We will curate them and make them available for everyone to use. No cost to you; we don’t ask for any money.” Some did send the data sets. Most did not. The most common reason they did not was that the data sets were outside the researchers’ control. Perhaps the researchers had never seen the data. The principal investigator had never seen the data (I hope they had seen the manuscript; I’m not so sure about that). Industry sponsors or other entities control the data; they usually wrote the paper, and maybe the principal investigators put a few commas here and there. But they don’t control the science that they reported.
Final situation. We’ve seen questionable data, conflicted data, biased data, well, how about no data?
Some entrepreneurs think that science and the scientific method are obsolete. Or that research should be done in stealth mode; we shouldn’t disclose anything. Just create the best product, move it forward, save the world, save the economy, whatever, and forget about science. This is a joke, pretty much.
Theranos had that perspective: we work in stealth mode, we’re proud of it, we don’t publish anything, but we know how to do it. I wrote the first paper against Theranos about a year before the Wall Street Journal exposé, and it went through editor review, peer review, and legal review. It took a few months, but it came out in early 2015, arguing that this is a big problem and that I don’t know if their valuation should really be $9 billion or just $9. I got a few legal counsel calls from them. Not that bad; I’ve had worse. But, as you know, Theranos collapsed, and now it’s not worth $9. It’s worth zero dollars, and its leadership is in jail.
Theranos is not a single bad apple. We’ve looked at unicorns across healthcare. About half of them have the same profile of not publishing evidence. We don’t know, scientifically, whether what they do is really worth these billions of dollars. We’ve seen it even in the best scientific fields, like immunology. That’s another analysis we did, in Nature Immunology. Pretty much, the companies with the least visible evidence had the highest valuations. If you publish less, you get a higher value for what you do.
I don’t want to leave you in complete despair because, over the years, many people have been sensitized. There are lots of efforts to try to mend the situation.
This slide is a list of twelve families of practices that one can pursue to try to improve the credibility of the research evidence: large-scale teamwork; adoption of a replication culture; registration; sharing; reproducibility checks; ways to contain conflicted sponsors and authors, some more efficient than others; more appropriate statistics; standardized definitions and analyses; more stringent thresholds for claiming discoveries or successes; improved study design standards; improvements in peer review, reporting, and dissemination of research; and better training and better literacy and numeracy in statistics.
This is what I do in my team; meta-research, research on research, trying to improve research practices. Are we successful? I’m not so sure. I think that some movement has happened, but I think that there’s still lots to be done.
To conclude, evidence-based medicine is still alive, but not well. Its key tools—randomized trials, systematic reviews, and guidelines—can function well, but they often acquire dystopian features and get hijacked by conflicted stakeholders. There are many potential improvements in the evidence base and in how it can be protected from bias. Methods matter, research practices matter, incentives matter, ethics matter. And I’m eager to hear what you have to say to help that situation. Thank you so much.
[applause]
Stefano Feltri
Thank you, John, for this brilliant presentation. I have a couple of questions before opening the floor for Q&A, and I think we still have time.
As we heard this morning in the paper about economists with conflicts of interest, people reading a paper in which the authors are conflicted apparently know that they are conflicted. What is the perception of this kind of research in the field? If everybody knows that some science is actually paid for by corporations, even written by the industry, how do they approach these papers? Why make all these efforts to produce such an amount of information that, actually, probably nobody truly trusts?
John Ioannidis
Well, I’m not sure that it’s true that nobody trusts it.
It does make a difference. It does convince regulators. It does convince the top journals to publish it and to depend on revenue from publishing these papers. It does convince academic centers to recruit researchers who are the most heavily conflicted into their leadership positions. There’s lots of people who, in one way or another…
Stefano Feltri
But are they aware of all these problems? And they pretend that…
John Ioannidis
I think that they’re partly aware. But I think that there can always be some self-justification, that, “Goodness, this is the real world.” And I also have to admit that. I would never argue that we should just keep the industry away from biomedical science, you know? Innovation requires entrepreneurship and a private sector as well.
I think that what is happening is that there’s segmentation of trust. There are lots of people who say, “I’ve lost trust, and I have some very visible examples of why I should lose trust, because I have been fooled, I have been cheated, and I see that this is ridiculous.” And you have other stakeholders who are saying, “We need to be pragmatists. This is not going to change overnight. We’ll just kick the can a little further, continue doing what we do, and maybe try to make a few incremental improvements here and there.”
There are things that have improved. In some of them, the industry has taken the lead. For example, we now have open payments disclosures about which physicians are getting specific payments from big pharma. The industry was influential in making this happen. I’m thinking, “Why?” Well, because now physicians cannot really ask to have their whole family sent to a resort in Thailand because, you know, that would be visible.
Stefano Feltri
Cartel to minimize the market power of…
John Ioannidis
They’re saving money by being transparent. I think that there’s other situations where there can be win-win kind of approaches, that you respect the wish of the industry to generate new interventions and new products and, at the same time, find the optimal place for them with less impact of the conflicts in the process.
Stefano Feltri
Very interesting.
I would like to ask you to comment a little bit more about the covid moment because it’s probably where the first part of this conference interacts with what you said, that the platform shaped the conversation.
We know from the Twitter files—now that Elon Musk opened the archive of Twitter—that Twitter was shadow-banning people who challenged the idea that lockdown was the optimal policy. We know that the Biden administration applied some pressure. How do you see the interaction between this problem that you mentioned and the problem that some people can trust these kinds of findings too much and pretend that even raising doubts is a problem, so the conversation should stay within very narrow boundaries?
John Ioannidis
Covid-19 was a special case because we saw all of these problems that had been documented in the past appear in a very acute setting, in a setting of an acute crisis of global dimensions with tremendous potential for harm. There was a lot of warfare, in a sense, in the creation and dissemination of information. I have to say, I could never have believed that something like that would happen. I feel like I was a complete idiot, completely unprepared, you know, interested in my little math and methods and biases, while all of that beast—of information, misinformation, disinformation, not even knowing what was myth and what was not—was happening.
My only conclusion out of this was: if you have five minutes of spare time, give those five minutes to someone who thinks differently than you do and allow them to express their opinion, present their data, and say why they have an opposite opinion. If they’re right, they will be my benefactor because they have proved me wrong. If they’re wrong, five minutes will be enough to show that what they say is completely bogus (or, the more time you give them, the clearer it becomes).
It’s a shame because we created that “follow the science” cloud and, goodness, we were wrong on so many issues that “follow the science” now has a very negative connotation for many people. And it’s something that I wanted to be able to tell people: we want more people to believe that science is a good thing and that it’s helping humanity. Instead, now what am I going to say to these people? “I was such a fool in trying to promote this or that idea, and we failed so miserably…”? So it was a major setback, unfortunately.
Stefano Feltri
One of the problems that journalists faced in those years was that we wanted to be correct and careful in presenting scientific evidence. But at the end of the day, what we could usually do was ask an expert to present his or her opinion. I think that is something that probably made the problem worse.
How can we fix this? We heard in other panels that media has their own problems. Now there is this rise of news influencers that have their own business model, they have their own audience, and they can have their own experts. How can we reconcile these changes in the information landscape with what you are saying, that basically we need some form of very uniform procedure to decide what is trustworthy and what is not? How can we square all this?
John Ioannidis
Evidence is not about experts. In fact, there is a major opposition between evidence-based medicine and expert-based approaches. In the fundamental debates that happened decades ago, evidence-based medicine started as a reaction to expert-based medicine. There has since been some rapprochement between the two.
The problem that we have now is that experts are using the term “evidence-based medicine” to hide their non-evidence-based expertise.
We saw that happening very prolifically during the pandemic. Most of the experts were not really experts in what they were talking about; I think they were more opportunistic experts who had a very strong wish for public visibility, and they grasped the opportunity. They had the right connections with journalists, with influencers, with government, with other influential players, with big tech, and they became the experts. If you look at their credentials, most of them were not experts. Even if they were experts in what they were talking about, an expert should just give way to evidence and say, “I don’t know,” or, “There’s uncertainty here,” or, “You’d better look at that area and these studies; that’s not really my area of expertise.” We saw very little of that, or none of it. I think anyone who was dubbed an expert just jumped to grasp that crown of expertise and played along: “I know everything,” and, you know, “Ask me more, I can tell you more.” Well, we knew less and less and less.
Stefano Feltri
Now that Covid is over, luckily, where do you see the same problem? The first thing that comes to mind is the obesity drugs debate; I read in the Financial Times that the chief scientist of the Novo Nordisk Foundation advises Nestle and other big companies that basically create the problem that Novo Nordisk then tries to solve. Where do you see these problems now that the pandemic is over?
John Ioannidis
Every day is different.
There are some big markets within medicine, public health, and personalized treatments, in the way that people want to use interventions, or devices, or diagnostics, or technologies.
The bottom line is that about one-fifth of our GDP in this country is spent on healthcare, and it’s not efficient. Our life expectancy has not gone up for many years now; it has even gone down. We’re losing ground compared to most other countries that are in the same, or even a somewhat less, financially privileged situation. It is also a global problem that we probably spend a lot on things that do not work.
I don’t want to be a pessimist and nihilist. There are things that work, things that have made a big difference. And there are also some very easy targets. Like, abolishing the tobacco industry should be an easy target; we’re talking about 10 million deaths every year, most of them young and middle-aged people.
So there are things that work, there are easy targets, and there are lots of things that could add costs that escalate beyond any control. I don’t know whether we will destroy our economy based on obesity drugs, based on Alzheimer’s drugs, based on whatever. Some of those might work, and we have to see exactly how much they work. We need more independent and un-conflicted cost-benefit analyses. Most cost-benefit analyses have all the problems that I mentioned about guidelines, sometimes even more, so they have obvious conflicts of interest.
We need to have a public-good perspective—because eventually it affects everyone, not just a single patient—on what is acceptable, what is promoted, what is endorsed, and what an enthusiastic case is made for.
Stefano Feltri
Okay, thank you. I think we can open the floor to questions. Let’s start with one over there.
Audience Member 1
Really important work, glad to see you share it in this audience and kind of confirm some stuff that I’ve thought for a long time.
You had a list of prospective solutions. One that I didn’t see on there, which I was a little bit surprised by and would like to hear your thoughts on, was the idea of adversarial collaborations. I think that’s actually particularly important in an antitrust context now because courts are often moving to something called an “expert hot tub.” They’ll get experts who have written different reports for different sides, and then the judge will bring them together at the same time and ask similar questions, or will define the questions in a certain way. That sort of mirrors the idea of an adversarial collaboration in an academic setting.
John Ioannidis
I think that’s a very good thing to add to that list. I have to think about which fields and what types of situations, studies, and approvals it would make more sense for. There are some situations where you don’t have any adversaries left! They have all moved out because they see that, “Goodness, this is not fertile ground for me to be in.” But in principle, yes. I think it’s a very good idea.
Stefano Feltri
Paul?
Audience Member 2
Thank you very much for this.
I wonder if you’ve looked at the topic we’ve been discussing today, the tech industry funding economic studies. I’m going to put out something that I’m sure is wrong and naïve, and I want you to tell me why.
I would think one thing you could say about pharma is that, every so often, their business model aligns with human wellness and flourishing. With tech companies, it’s either orthogonal or actually the opposite, where they make money when people suffer more. Would you expect pharma, maybe once in a while, like a broken clock, to do the right thing? And should we be more skeptical about the tech industry?
John Ioannidis
I think that you have a continuum. Big Pharma, indeed, can have goals that serve humanity. Big Tech may or may not; Big Tobacco, never; and other “big x,” probably close to never; you know, Big Food, probably close to never. So, there’s a continuum.
I think that it goes way beyond my expertise, whatever non-existent expertise I may have, to try to think about how to reduce the power of players who are, let’s say, orthogonal or sometimes very detrimental for humanity and public good. Or how, perhaps, to incentivize them to do things that may be more in favor of common good outcomes.
Audience Member 3
I think that, really, a theme across these industries is the way they control narratives, to say, you know, don’t we love our bank, don’t you love your bank account? I was in the Netherlands recently and it was like, oh, don’t you take a mortgage? As if this has something to do with… You know, you can’t ban food, we do need banks, but it’s the narrative that’s supposed to justify what they do. Maybe the key word is skepticism when people with interests say stuff. Certainly that’s true for pharma. They’ll say this is better than that. But is it better than something else they forgot to mention? Or how they framed their experiment this particular way, or where they cut the side effects, or whatever it is. Maybe putting it in a broader context is to say: don’t believe. Distrust, verify. Or at least don’t trust right away.
John Ioannidis
I agree. We need some recalibration of skepticism. At the same time, there are segments of society that have become over-skeptical. To some extent, maybe it’s not their fault; maybe it is our failures that have boosted their skepticism. Calibrating the right amount of skepticism is important. It is not a fixed thing, because it needs to be evaluated on a constant basis, trying to understand who believes what, and why.
Audience Member 4
Much of your presentation focused on evidence-based versus expert-based opinions within the space of medicine, which is really interesting. I wonder if you believe that there is still space for expert-based opinions, especially in fields that are under-funded on the research side, such as women’s health conditions. A trial recently came out showing that bacterial vaginosis (BV) can be prevented by treating male partners with antibiotics, which is something that had been thought to be a consensus among doctors who could be considered experts for years. It just hadn’t been researched because of underfunding in specific fields of medicine.
John Ioannidis
Since we’re missing, sometimes, 98% of the optimal evidence, this gap needs to be covered by experts one way or another.
I think it’s important, though, to make it very clear, very visible, and admit that, “Goodness, we have no evidence, or we have very poor evidence, or very weak evidence. This is opinion. You can take it, but opinion means that it has very high uncertainty, as opposed to those questions where we have strong evidence and into which we can put more faith.”
That calibration is missing at the moment. Very often, we see gaps filled by expertise without people recognizing that it is just opinion: “This is just John Ioannidis mumbling, and he may be completely wrong.” This has to be made clearer, I believe.
Stefano Feltri
They tell me that we have time only for one question.
Audience Member 5
An observation for the hive mind, and a question. The 1890 act, the Sherman Act, we have called an antitrust act since the early 20th century. I wonder if anyone has actually ever looked into that. I’ve been trying to. The trust was a positive, affirming, largely east coast and midwestern legal instrument, and then we turned against it. That’s just an observation, something for the break.
The question is: the Trump administration is attacking universities, or cutting funding for universities, almost all of it in the medical and engineering fields. Could this have some positive outcome if it leads to a reshaping, a rethinking, a Schumpeterian gale of creative destruction?
John Ioannidis
Less funding, in principle (this is an expert opinion so I may be wrong), means more competition for more limited funds. Many of the problems that we face are because of the need to exaggerate—to outpace, to outperform with exaggeration—competitors because we have limited funds. Just cutting funds I don’t think is going to help. I think it’s going to be detrimental.
More appropriate allocation of funds? I’m all in for that. This has been a quest since I can remember myself as a student or as a fellow: “We spend a lot on administration, but we want it for science, we want it for young scientists, we want it for high-risk ideas, for innovation.” All of that, I think, continues to be a very good idea. The question is, how do we do that without getting entangled in more and more politics, more and more partisanship, and more and more toxicity?
I do worry about science becoming a political battleground. That’s not going to help anyone. The winners and the losers, they’re both going to be losers because science is indispensable. At the same time, it needs independence and optimal efficiency. All of that is good, but it’s not going to be determined by who has the upper hand in politics. I think that that’s going to be the wrong way to go.
Stefano Feltri
We have time for a very final question.
Audience Member 6
Great presentation. As I think about technology, to bring technology into this conversation: the use of LLMs for information treatments, to help you with financial planning, to help you with legal questions. Given that the underlying data generation is potentially biased, and that this will then be fed into the LLMs that are trained on it, if I were to ask a medical LLM, “What does the evidence tell me about anti-depressants?”, given your meta-study, it would say they’re great. Are there ways to sound the alarm now, before we get all these tech companies saying, “We can inform consumers because it’s not about the money; it’s just informing consumers”? Have you thought about that? Have you thought about how to get consumers more aware of this?
John Ioannidis
LLMs are a new tool, and I’m happy to have new tools. I want to see what they do, what they perform well on and what they do not perform well on, and then try to place them in an optimal position in what we do in research, in disseminating information. As you say, if you feed them with wrong information, they will disseminate wrong information. That’s clearly a big risk at the moment.
Another big risk is that, currently, computational power related to LLMs and other AI tools is concentrated in big tech. Stanford is outperformed about a hundredfold, maybe even a thousandfold, compared to Apple or Microsoft or Meta. The last Nobel Prize actually went to DeepMind and Google because of computational power; they dealt with a problem that academics and researchers had not been able to solve over decades. And, based on computational power, they could make a major difference.
Making a major difference in the right way is wonderful. But having that computational power imbalance, and having research that is within the remit of these big tech companies—to release, or not release, or selectively release, or selectively present, or selectively disseminate—could be really horrible. Academics would then be left with their little toys, their old-fashioned tools: “Well, you can play and publish a few million papers. Who cares? The kids need to play.”
But the real, quote, unquote, “evidence”—non-visible evidence, subverted evidence, distorted evidence—would be in the hands of these big tech companies.
Stefano Feltri
Okay, thank you so much. That was great.
Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.