_OUR GUEST_

Brian Nosek, a pioneering psychologist and professor at the University of Virginia, shook the scientific establishment in 2015 when his Reproducibility Project revealed that only 36 percent of published psychology studies could be successfully replicated — exposing a crisis at the heart of scientific research.

As co-founder of the Center for Open Science, Nosek has built an organization dedicated to increasing transparency and accountability in scientific practice.

His work challenges the academic incentive structure that rewards publishing new, positive results over methodological rigor and accurate findings, pushing researchers to focus on career advancement rather than discovering truth.

_WHAT WE DISCUSSED_

  • Brian Nosek reveals the reproducibility crisis in science and the troubling research culture

  • How incentive structures in academia push scientists toward career advancement over truth-seeking

  • The role of decentralized skepticism in science

  • How solutions like Registered Reports commit to publishing studies regardless of outcome

  • The impact of bureaucracy and funding biases on research

_THE INTERVIEW_

This interview was auto-transcribed and edited for clarity.

Ari: I think this topic is one that a lot of people aren't tapping into. The media isn't covering it as much as I think it should. It seems incredibly interesting, especially because of how much of a role academia and the science world play in everyday life.

The way politicians make policies, the way we're recommended to do certain things—science is very impactful in our lives.

I remember just a couple years ago, at a peak moment of division in the country, people had signs in front of their houses that said, "Science is real." Clearly, it's important for us, and there's something going on.

Can you give a quick introduction about exactly what is happening and the work that you've been doing in that field?

Brian: Yeah, sure. Science is the most important long-term investment for humanity. Everything we know about progress, innovation, and how we improve our standard of living is based on scientific findings. We need science. We thrive from science.

Our history is shaped by the advancement of science, but science doesn't operate optimally. It never has. It's a process where we're constantly trying to figure out how to do this better. What gets in the way of discovery? What helps us be confident in research findings and results, and able to apply those results?

We grow up learning about the scientific method, but really, there's no single method. If there's a method, it's "Can we do this better?" So we're constantly learning not just about new results and claims, but also how to better try to discover reality. That's the broad context of what science is about.

The recent reckoning and debates about reproducibility are rooted in recognizing that parts of the research culture itself may be interfering with progress. When we talk about improving science, we're often talking about specific methodology — like building a better telescope.

But the research culture concern, which is where I work, is about the system of rewards and the processes by which we conduct and communicate our findings. These might be interfering with discovery.

To be concrete: I advance in my career as a researcher by publishing in journals. The more I publish, and the more prestigious the journal, the further I can advance. It helps me get jobs and keep my job. But not everything gets published.

Some things are more likely to get published than others. You're more likely to get published for finding something new than for gathering more evidence about something already claimed.

Positive results are more publishable than negative ones. And if all your evidence is neat and tidy, it supports the story you tell in the paper, which is more publishable. But science rarely has neat and tidy stories.

Science is hard. We're pushing the boundaries of knowledge. We don't know how things work, and we're trying to figure it out. There are exceptions, false starts, and things that don't fit. But the reward system demands beautiful stories for career advancement.

So there's a conflict of interest between what's actually happening in my lab and what I need to get published. That conflict might lead me, unintentionally, to leave out findings that don't fit, or rationalize analyses that make things look more publishable.

At its core, the problem we're trying to tackle is how to separate the reward system from producing a certain kind of paper, and make it more about rigor and reproducibility. That should be the basis of reward.

Ari: It makes total sense when you break it down to academia as a collection of individuals. Each person has their own incentive structure, just like journalists might have their own biases or political preferences.

Scientists could be looked at the same way, especially when it comes to advancing their careers.

When you paint that picture, and I'm a natural skeptic by disposition, it makes me want to be very skeptical of everything coming out of the science world and new research.

What is the right way, as someone who is not in the science world, to look at things now through that lens — that there's this crisis happening?

Brian: That's a great question. While I lay that out in a dire way, at the same time, science is making enormous progress. We're discovering amazing new things. The James Webb telescope is finding incredible things about the universe. That's true across all areas of science.

So the reward system is a challenge, but not because we can't believe any science. It's because the reward system creates friction and slows the pace of discovery. We're not getting as much out of our research investments as we could.

The way I think about this is to recognize a key feature of science: it's decentralized. That's the point — your natural skepticism is built into the system. There's no boss of science who decides what's true and what's false.

The President of the National Academies doesn't dictate what's true. Nobody does. Science makes progress through open discussion, collaboration, providing evidence, questioning evidence, and challenging it.

Over time, the evidence converges on things that are well supported. Ideas that aren't supported move off to the fringe. It's not that nobody believes them, but the system corrects that through convergence.

The drag these cultural problems create is that they make it harder to correct errors over time. We're all trying to advance our careers, so we're putting more error into the process than we otherwise would.

What I suggest, and what I do myself when thinking about scientific evidence in fields where I'm not an expert, is to look for converging evidence. Has this been wrestled with from multiple points of view? Is it a single study, or is it an accumulation of evidence that's been challenged over time?

A good example is the perennial article about coffee: "Coffee's good for you." "No, coffee's bad for you." "Wine is great for you." "No, don't drink wine." These stories are often based on a single study, but people's diets are complex. There are so many variables. It's hard for any one study to narrow to a solution, but across hundreds of studies, we might converge toward a confident answer.

That's the main thing I use to benchmark what to believe and what not to believe.

Ari: One thing you mentioned is that science is decentralized and driven by skepticism, which is supposed to create consensus as people try to figure out the truth. I want to challenge that a bit.

If we look at academia in America, the government is a big player in giving grants and choosing what gets funded. Then you have academic institutions, like universities, which are criticized for having a liberal or progressive bias. The government, depending on the administration, points resources in different directions.

How does the crisis of reproducing research get impacted by the fact that these institutions might be politicized or directing resources into politicized fields? What are your thoughts on that?

Brian: People want the things they currently believe to be true. That's a natural human state. Science as a process is supposed to confront us with reality, regardless of what we want to be true.

But scientists are part of the process, so our humanity is embedded in how we decide what questions to ask, what evidence to collect, how we interpret it, and how it's shared.

There are definitely risks of homogenous thinking — groupthink — on particular topics. Sometimes we don't question certain things because we don't want to. But science is decentralized.

Even when there are factions pushing a particular point of view, evidence is evidence. The pathways for challenge are available. Anyone can submit evidence to a scientific journal for consideration. There's a lot of humanity in that process, but it's relatively open.

What I spend my time thinking about is how to confront ourselves more effectively, because we all have ideologies and assumptions, and we're more suspicious of people with different ones. One solution is a publishing model called Registered Reports.

In this model, I lay out my research question and methodology and submit it to the journal before conducting the research. Reviewers evaluate whether it's an important question and if the methodology is effective. If they agree, the journal commits to publishing it regardless of outcome.

This changes the incentives. No one knows the results, so the journal, editors, and reviewers can't be biased by the results. I don't know the results either, so even if I want it to come out a certain way, I have to report it as planned. This model removes outcome bias and inserts a rigor bias — more rigorous research is more likely to get published.

What I love about this model is that it gets people with adversarial positions to work together. If you and I disagree on a problem, and neither of us knows the results, we have to agree on the best way to test the question. It changes the orientation to collaborating on designing the best test, because we're both sure we're right.

Ari: I can imagine people getting into science who are ambitious and career-driven might look at that solution and think, "I wasn't planning to play fairly anyway, and now this might get in my way."

Are you seeing pushback from people fundamentally opposed to this kind of reform?

Brian: There is opposition, but I don't see it much at the onset of people's careers. Where I see it is with the most senior scholars who have decades of commitment to a particular point of view.

There's a saying in science: "Science advances one funeral at a time." It requires people to die before ideas can move on. It's a joke, but not really. We get invested in our areas.

That's not universally true. I have senior scholar friends who are excited by this model. The general reaction from most researchers is high enthusiasm for this approach. Very few people get into science to push an agenda. It's much easier to push an agenda in other fields.

In science, people challenge you constantly, even if they agree with you. The whole orientation is criticism. Why go into a field where people just criticize you if you only want to advance your agenda?

People go in because of curiosity and wanting to contribute to knowledge. This model frees people from needing certain outcomes. Many report that when they go through it for the first time, they feel like this is what they imagined science would be — curiosity, asking questions, wanting to know the answer.

Sometimes things don't work out, and that's fine, but you should still get credit for rigorous work, even if it doesn't pan out.

Ari: You just said that's what people imagined science would be like, without the pressures of the current system. What are the other problems in the system that stop genuinely interested people from doing research?

I've had friends who went to university, wanted to get involved in research, and ended up disenfranchised by bureaucracy and the hoops they had to jump through.

What is it in science that might be preventing people from just doing research and getting to the truth?

Brian: You raise the most mundane, but probably most important one: bureaucracy. Senior scientists, in several surveys, report that about 42 percent of their time is spent filling out forms for grants, writing proposals, and reporting on research, rather than actually doing research.

Nobody goes into science wanting to spend time writing grant proposals, but it's a necessity because that's how the system works.

There's more competition for jobs and money. Some competition is good, but when it's extreme, it produces counterproductive behaviors that interfere with the process.

There's a lot of opportunity and experiments being tried to innovate on funding and lower the administrative burden, so researchers can focus on the work itself.

One concrete example: most research funding is project-based. Reviewers deal with a hundred proposals and fund five. They want to fund the ones that are good and likely to succeed, so review committees get risk-averse.

The most outrageous, interesting, and provocative proposals often get averaged out and end up in the middle of the pack. The things that rise to the top are safe and incremental.

One way to challenge that is a golden ticket system. Each reviewer gets to pick one proposal to fund, no matter what. That way, high-risk, high-reward ideas can get funded if even one person is willing to take a chance. Trying different ways to distribute resources could accelerate discovery.

Ari: I want to ask how you would respond to an argument I used to see online, especially in right-of-center discourse during the pandemic. The topic was climate change.

I'm someone who has never really dug into climate change because it's almost infinite, and as someone without a science background, I can't come to a decision on what I believe.

But one argument that resonated with me was that scientists working on climate change get grants specifically to look into it. If they started looking into parts that could show climate change didn't exist, they'd be out of jobs.

That incentive structure makes me want to be skeptical about that research and research in general.

How would you respond to that argument?

Brian: It's a good question. I see that view everywhere. It may be more political in climate change, but in any field, if funding depends on certain results, people worry you'll always get those results.

There are a couple of responses. First, in a decentralized community, you need multiple types of funders making decisions about what gets funded, rather than a single source.

In the US, institutions like NSF, NIH, and DARPA all fund research, and even within them, decision-making is decentralized. Reviewers are constantly changing, which fosters diversity in evaluation. There are also private funders with their own interests.

Another argument is that if you, as a researcher, could provide credible evidence that challenges the mainstream narrative — whether about climate change, vaccines, or anything else — you would make your career. Actual credible evidence that challenges the mainstream is the most desirable evidence in any field. You'd become known for it, just like Einstein challenging Newton.

It's hard, though. People will push back and criticize you, but the system is designed to sort that out. Plate tectonics is a good example. The original researchers were laughed at, but over time, evidence accumulated, and the view changed.

Ari: Let's say it's not something like plate tectonics or gravity, but something like climate change, where billions of dollars come from funders with political ties.

If someone came out with credible evidence that climate change might not be real, wouldn't it be natural to expect those people to be excommunicated, especially in a setting where people are emotionally driven?

Brian: There are two answers. First, you underestimate the emotions attached to things like plate tectonics or snail shapes — researchers care deeply about their topics, even if the public doesn't. There's a lot of passion in every debate, but it's usually a small, passionate community.

If the system isn't decentralized and open, it can create problems for getting to the truth. You have to go through the gauntlet like everyone else. People who disagree with you also have to go through it. We'll see what survives.

Your other point is a real concern. It's not just climate science — it's health, nutrition, pharma, and more. Companies with billions on the line fund research on their products. There are lots of cases where political, ideological, or financial stakes influence the process. As science improves, these are areas to keep poking at — how do we address conflicts?

The first step is conflict of interest statements. Most journals now require you to disclose your funding sources and other conflicts. It's not that you can't do the research if you're funded by a company, but readers should know and evaluate the research in that context.

Another solution is registration. Clinical trials of pharmaceutical treatments must be registered on clinicaltrials.gov before the research is done, laying out the plan in advance. That way, there's a public commitment before results are known.

Anyone can check if a study was registered and whether the results were reported. If not, it's clear something's off. This creates accountability and helps recognize where biases might come through.

Ari: It seems like you've been trying to fix this. You have different solutions, and you're optimistic that it can be fixed.

How does the world look different if these fixes go through? Not just what science looks like inside, but what does it look like for a regular person reading the news? What changes for us?

Brian: That's a great question. The answer is actually subtle in daily life. These solutions don't change the fact that science makes progress.

What they do is get higher yield for those investments. We will sort things out eventually — whether a solution is right or wrong, or a theory is supported or not. But our point is, we're not willing to wait. If we could solve something in five years, why spend fifteen?

Improving the process of sorting through evidence by aligning rewards for getting it right, not just being right, is key. Researchers should be celebrated for showing when their ideas are wrong, not punished for it.

These changes might not be visible to the average person, but we should be able to see more easily the basis of findings and why we should be confident in them.

If we can improve transparency and include the public in the process, even if most people won't read technical articles, the fact that they could is important. It lets people see the process, the messiness, the debates, and the convergence of evidence over time.

That won't work in a newspaper or simple media, but opening the windows so people can see how evidence accumulates will improve trust in research and help the public recognize what they can trust.

The outcomes are just outcomes. What we want is for everyone to see and trust the process of discovery. People will disagree and have ideologies, but the system will work it out.

Ari: Last question for you. You're driving, you see a billboard that says in massive words, "Science is real." How do you react?

Brian: I agree, it's real. I see it all the time. But some of the messaging errors during COVID, for example, were about "the science" as an appeal to authority. It wasn't "trust the science because here's the process that led to this claim."

I think we underestimate the public's understanding and willingness to go along with uncertainty. Science is a process of reducing uncertainty. We have lots of ideas and gather evidence to sort through those that have merit. There's always uncertainty, and we're reducing it over time.

During COVID, there was a crisis of speed, and communicating uncertainty was hard. Instead of representing uncertainty and saying, "We don't know, here's our best evidence, but policymakers have to make a decision," the message was often, "Here's the answer," ignoring uncertainty. I think the public would have recognized the uncertainty.

So that's a more elaborate answer than you were looking for, but I like the idea. What I want from the community is more communication about uncertainty — where we're at with our confidence in a claim.
