RE: https://neuromatch.social/@neuralreckoning/116286691385862197

Science would be so much better if we did review (of grants and papers) constructively and collaboratively, instead of only using them to produce binary accept/reject decisions. To do that, we have to separate review processes from decisions. One idea for grants 👇

@neuralreckoning I can see this working for some fields, in the way that pre-registration works for research that can be planned and executed linearly. IME wet lab experimental science is messier and more convoluted. A back-and-forth with a committee could hone a perfect plan on paper, and even with consensus that this is the best approach, failure is a likely outcome (for unforeseen reasons). To quote Mike Tyson: "Everyone has a plan until they get punched in the mouth."
@steveroyle I don't see it as honing a plan with a committee. In my view it's more like having a mandatory independent critical opinion. It's much easier to find issues with someone else's ideas than your own. The end result doesn't have to be as detailed as a preregistered experiment, and indeed I don't find pre-registration compelling at all for precisely the reason you state.
@neuralreckoning OK. But currently, I write a proposal and my colleagues give me feedback (this won't work, have you thought of this). Then I submit it and the reviewers give me more feedback to hone the plan. There's no hiding details beyond not having space to describe everything. And I think an applicant needs to say how they would tackle the question to know that it is tractable. So, there's independent critical opinion already in the current process.
@steveroyle yes but the independent critical opinion is only destructive (get the grant rejected) not constructive. And I would guess that when you give your draft proposal to colleagues you're asking them to help you get it funded not how to do the research better, so both you and your colleagues aren't engaging in the type of constructive critical process I'm imagining.
@steveroyle even if this isn't consciously what you're doing, I think that because everyone understands the context of the grant-writing activity, it's implicitly what you all know you're doing.
@neuralreckoning yes I agree that the game is grantspersonship. It's also true that external reviewers' critique is viewed negatively by panels, even when it's constructive. I don't think critique is "only destructive", but I can see that decoupling the two (decision and criticism) has benefits. For sure, I've had proposals go unfunded where I'm sure the idea was sound but it didn't fly for some reason. Would be great to start from the agreement that it's worth pursuing the idea and build up.

@neuralreckoning @steveroyle

#NIH

When I started my faculty career (2000), it was the tail end of the 25-page you-can-survive-on-one-R01 grant structure.

1. Although they did have triage for senior investigators, all early-investigator grants (and all fellowships) were reviewed every time. In practice, less than a third of the grants were actually triaged in the first study sections I saw (~2004).

2. Grants took 1-2 cycles, but generally came back to the same reviewers and there was a definite belief (consistent with my anecdotal observations) that by the second round, you would know if you were going to get a fundable score. (Think how paper reviews still work in many journals.)

3. Grant scores (1.0 - 5.0) were clear messages: 1-2 meant "Fund it!" 2-3 meant "Fund if possible", (think "minor revision"), 3-4 meant "I want to see it again after you fix the flaws" (think "major revision"), 4-5 meant "This is problematic, chase something else" (think "reject").

The more I saw this system, the more it made sense. It meant that reviews were about helping make the science better (and they were detailed method reviews!). It meant that you could be pretty sure you would be funded if you planned well enough ahead and so you could survive on 1 R01.

But then NIH decided they were "wasting reviewers' time" and that "they just needed to find the fundable grants" and "it wasn't reviewers' role to tell people how to do science" so "they should just judge the questions". They started triaging 50% of all grants. They shifted to this non-linear 1-9 scale (1-8 = 1.0-2.5 in the old system, 9 = everything else --- which no one actually follows, making the system even noisier). They cut the grants down to 12 pages [6 for small grants] so that they were "just judging the questions". (Which means there are no methods to critique anymore.) And they insisted that reviewers were supposed to score every grant independently of its history, as a new attempt.

The more I saw this system, the worse it appeared to me.

@adredish @steveroyle an elegant system for a more civilised age?

@neuralreckoning @adredish @steveroyle

Or an age with more funds and less screaming in politics, i.e., more long-term thinking --- at least seemingly so from a (temporal) distance.