My colleague Kevin Gross and I have a new preprint up on the arXiv.

Just for fun, rather than a simple text explainer, a thread with some slides for a talk I'm giving tomorrow at the International Conference on the Science of Science and Innovation (https://www.icssi.org/).

Here's the paper itself: Rationalizing risk aversion in science. https://arxiv.org/abs/2306.13816

The basic issue at hand is high-risk, high-return science. There is widespread sentiment, and even some scattered empirical evidence, that scientific research within academia is too cautious and that higher-risk, higher-return research would yield more progress more quickly.
If you ask people why we don't see more high-risk science, you get different answers. Researchers tell you that granting agencies won't fund it. Funders tell you that researchers won't propose it.

A couple of years ago, we published a PNAS paper that tackles the researchers' side of the story, explaining why grant review panels may be unlikely to fund risky studies.

https://www.pnas.org/doi/10.1073/pnas.2111615118

The present paper addresses the funding agencies' side of the story, and looks at why researchers may be reluctant to take on high-risk projects even when the funding is available.

To get at this, we have to think about the incentives that academic researchers face.

Because it's very difficult to monitor the effort that researchers put in, academic scientists are rewarded almost exclusively for their research output.

Rewards come in the form of jobs, promotions, salary, and prestige, for example. We'll refer to all of these as wages.

We note that, particularly where job security and salary are concerned, scientists are risk-averse in wages.
When investing in risky research, funding agencies can hedge their bets across a portfolio of large-scale high-risk projects. Individual scientists can't typically do this.
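
To make these two points concrete, here is the textbook expected-utility way of writing them down (a minimal sketch in standard notation; the paper's own model may set this up differently). A risk-averse researcher has concave utility u over wages w, so by Jensen's inequality

\mathbb{E}[u(w)] \le u(\mathbb{E}[w]),

meaning a sure wage is preferred to any gamble with the same expected value. And if each risky project succeeds independently with probability p, a funder backing n such projects sees a success fraction with variance p(1-p)/n, which shrinks as the portfolio grows, while a researcher staking a career on a single project bears the full variance p(1-p).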

Researchers might be willing to take on risky projects if they could be insured against that risk, with wages that didn't depend on the vicissitudes of scientific fortune.

But you can't completely insure against the failure to get results, because bad luck is indistinguishable from loafing, and you need to somehow incentivize effort.
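
In textbook principal-agent terms (again a sketch of the standard moral-hazard argument, not necessarily the formalism in the paper): if the wage is a constant \bar{w} that does not depend on output, a researcher who chooses effort e at increasing cost c(e) solves

\max_{e} \; u(\bar{w}) - c(e),

which is maximized at the lowest feasible effort. To induce effort, pay has to be tied to observable output, and because output reflects luck as well as effort, any such contract pushes some of the risk back onto the researcher.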

@ct_bergstrom can you not just count preprints and divide by the number of PhD students and postdocs? It's only positive-result publication bias that makes this a thorny issue, isn't it?
@neuralreckoning yes, I think the idea is that high-risk research often yields very little that’s publishable despite large amounts of work.
@ct_bergstrom right, so if we told people we were measuring them based on preprints (including negative results that wouldn't be publishable in a journal) and not on journal publications, they would still have a mechanism to show they had been working hard even if they didn't get journal publications.
@neuralreckoning @ct_bergstrom also, if you get a “non-working” experiment, or just a “negative result”, you might be less likely to take the time to turn it into a preprint because you know it might not be accepted for peer-reviewed publication anyway… so the measure you suggest will (unfortunately) not detect all of the work that has been done…
@elduvelle @ct_bergstrom right, but if you knew you were being judged on that basis and not on the basis of published work, you'd have the incentive to do it!

@elduvelle @neuralreckoning @ct_bergstrom

People get all worked up over negative results, but a real negative result is actually really hard to show. The problem is that a negative result can be negative for lots of reasons, most of them boring.

For example, maybe the DREADD didn't affect rat behavior because there was no DREADD in the virus. (That happened to us once.) Or maybe the human subjects did the task wrong because they didn't understand the instructions. (That happened to us once.)

To get a real negative result, you have to have positive controls to show that all of the techniques are doing what you think they are, and that the negative result is not a consequence of something trivial.

Yes, you need controls for positive results as well, but it's easier to determine what those controls are, and reviewers tend to demand them. People who try and fail to publish negative results almost never have the right controls for those negative results (which are not at all the same as the controls you need for the positive results). A well-structured negative-result experiment should be very publishable.