@neuralreckoning @steveroyle
#NIH
When I started my faculty career (2000), it was the tail end of the 25-page you-can-survive-on-one-R01 grant structure.
1. Although they did have triage for senior investigators, all early-investigator grants (and all fellowships) were reviewed every time. In practice, fewer than a third of the grants were actually triaged in the first study sections I saw. (~2004)
2. Grants took 1-2 cycles, but generally came back to the same reviewers and there was a definite belief (consistent with my anecdotal observations) that by the second round, you would know if you were going to get a fundable score. (Think how paper reviews still work in many journals.)
3. Grant scores (1.0-5.0) were clear messages: 1-2 meant "Fund it!"; 2-3 meant "Fund if possible" (think "minor revision"); 3-4 meant "I want to see it again after you fix the flaws" (think "major revision"); 4-5 meant "This is problematic, chase something else" (think "reject").
The more I saw this system, the more it made sense. It meant that reviews were about helping make the science better (and they were detailed methods reviews!). It meant that if you planned far enough ahead, you could be pretty sure you would be funded, and so you could survive on one R01.
But then NIH decided they were "wasting reviewers' time," that "they just needed to find the fundable grants," and that "it wasn't reviewers' role to tell people how to do science," so "they should just judge the questions." They started triaging 50% of all grants. They shifted to the non-linear 1-9 scale (1-8 maps onto 1.0-2.5 in the old system; 9 is everything else, which no one actually follows, making the system even noisier). They cut grants down to 12 pages [6 for small grants] so that reviewers were "just judging the questions." (Which means there are no methods left to critique.) And they insisted that reviewers score every grant independently of its history, as a fresh attempt.
The more I saw this system, the worse it appeared to me.