My colleague Kevin Gross and I have a new preprint up on the arXiv.

Just for fun, rather than a simple text explainer, here's a thread with some slides from a talk I'm giving at https://www.icssi.org/ tomorrow.

Here's the paper itself: Rationalizing risk aversion in science. https://arxiv.org/abs/2306.13816

International Conference on the Science of Science and Innovation

The basic issue at hand is high-risk, high-return science. There is widespread sentiment, and even some scattered empirical evidence, that scientific research within academia is too cautious and that higher-risk, higher-return research would yield more progress more quickly.
If you ask people why we don't see more high-risk science, you get different answers. Researchers tell you that granting agencies won't fund it. Funders tell you that researchers won't propose it.

A couple of years ago, we published a PNAS paper that tackles the researchers' side of this story, explaining why grant review panels may be unlikely to fund risky studies.

https://www.pnas.org/doi/10.1073/pnas.2111615118

The present paper addresses the funders' side of the story, asking why researchers may be reluctant to take on high-risk projects even when funding is available.

To get at this, we have to think about the incentives that academic researchers face.

Because it's very difficult to monitor the effort that researchers put in, academic scientists are rewarded almost exclusively for their research output.

Rewards come in the form of jobs, promotions, salary, and prestige, for example. We'll refer to all of these as wages.

We note that, particularly where job security and salary are concerned, scientists are risk-averse in wages.

When investing in risky research, funding agencies can hedge their bets across a portfolio of large-scale high-risk projects. Individual scientists typically can't do this.
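A toy simulation (my illustration, not from the paper) of why this asymmetry matters: a funder averaging over many independent risky projects faces far less variance per project than a lone researcher betting a career stage on a single one. The payoff numbers below are made up for illustration.

```python
import random

random.seed(1)

def project():
    # A risky project: 10% chance of a big payoff of 10, otherwise nothing.
    return 10.0 if random.random() < 0.1 else 0.0

def mean_sd(samples):
    m = sum(samples) / len(samples)
    var = sum((x - m) ** 2 for x in samples) / len(samples)
    return m, var ** 0.5

trials = 20000

# A lone researcher backs one project per "career draw".
solo = [project() for _ in range(trials)]

# A funder averages returns across a portfolio of 100 projects.
portfolio = [sum(project() for _ in range(100)) / 100 for _ in range(trials)]

m1, s1 = mean_sd(solo)
m2, s2 = mean_sd(portfolio)
print(f"solo:      mean={m1:.2f} sd={s1:.2f}")
print(f"portfolio: mean={m2:.2f} sd={s2:.2f}")
```

Both strategies have the same expected return, but the portfolio's standard deviation is roughly a tenth of the solo researcher's, which is exactly the hedge individual scientists can't access.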

Researchers might be willing to take on risky projects if they could be insured against that risk, with wages that didn't depend on the vicissitudes of scientific fortune.

But you can't completely insure against the failure to get results, because bad luck is indistinguishable from loafing, and you need to somehow incentivize effort.

Thus the scientific enterprise is caught in a bind. The measures needed to incentivize effort necessarily dissuade researchers from taking enough risks.

A social planner might induce effort and risk by paying large bonuses for major results.

But given any fixed budget to allocate among researchers, this is inefficient from the researchers' perspective: because of risk aversion, increasing the utility of a high earner even modestly requires taking a lot of wages away from low earners, at high utility cost.
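The arithmetic behind this, sketched with a standard concave (log) utility function; the specific wage numbers are mine, not the paper's.

```python
import math

def u(w):
    # Concave (log) utility: each extra unit of wages matters less as wages rise.
    return math.log(w)

low, high = 30.0, 200.0   # hypothetical wages of a low and a high earner
bonus = 20.0              # funded by transferring 20 units from the low earner

gain_to_high = u(high + bonus) - u(high)   # modest utility gain at the top
cost_to_low = u(low) - u(low - bonus)      # large utility loss at the bottom

print(f"gain to high earner: {gain_to_high:.3f}")
print(f"cost to low earner:  {cost_to_low:.3f}")
```

With these numbers the low earner loses more than ten times the utility the high earner gains, even though the same quantity of wages changed hands.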

But here's the thing: science has no external social planner.

Scientists themselves determine which results are considered worthwhile, and how worthwhile. They decide who is hired, promoted, paid, awarded prizes, and held in esteem. In other words, they set their own wages, subject of course to a budget constraint.

@ct_bergstrom Re: "science's social planner".

I'm wondering, at a glance, what role major prizes might play in this, such as the X-Prize for the HGP or similar.