"On being asked what made the LMB such a remarkable place, Max answered: ‘Creativity in science, as in art [referring to the Renaissance in Florence], cannot be organised. It arises spontaneously from individual talent. Well-run laboratories can foster it, but hierarchical organisations, inflexible bureaucratic rules, and mountains of futile paperwork can kill it. Discoveries cannot be planned, they pop up, like Puck, in unexpected corners.’"

From Daniela Rhodes' 2002 piece on #MaxPerutz
https://www.embopress.org/doi/full/10.1093/embo-reports/kvf103

#academia #MaxPerutz #MRCLMB #podcast

Every time I fill in a reimbursement form for what amounts to pennies, articulate a justification or a dispensation for a purchase, write half a dozen emails over something that costs £70 (not counting the time of everyone involved), write a "PDR" for a lab member who is leaving in a couple of months, or sit through another compulsory training course on a topic I could have written a scholarly paper about myself, I think of #MaxPerutz's statement above.

The academic scientific enterprise could be organised far more effectively. Start by evaluating scientists on what they have done, not on what they promise to do; the rest unfolds from there, with enormous savings in time (for scientists) and money (far lower admin costs).

Government, funding bodies, are you listening? Are you ready to let go, and evaluate scientists on past work, and save hundreds of millions in the process? Or better yet, reallocate them to science itself for an even bigger impact?

#academia

@albertcardona All for encouraging creative science. But doesn’t “evaluate scientists based on past work” entrench a rich-get-richer system?

@cian @albertcardona

There was a long discussion on DrugMonkey's blog back in the day about alternative systems. What is needed is a system that is fair to everyone on entry (before a track record is available), but can then be based on past work for continuation. That way people can survive on one R01, don't need to scramble for grants all the time, and can instead spend that time doing real science.

The best system we came up with had a competitive entry similar to today's granting process, followed by five-year checks based on past success. If you failed your five-year check, you were thrown back into the competitive entry system.

The big problem is that we spend a lot of time writing grants, and much of that time is simply wasted. In fact, a study found that the contest model that NIH and other programmes run (trying to find the "best" in a world where the best is unknowable and roughly 75% of grants are generally worth funding) is a terribly wasteful system. The authors recommended judging proposals as "good enough or not", and then awarding funds randomly (a modified lottery) or by distribution beyond that.

https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000065

Contest models highlight inherent inefficiencies of scientific funding competitions

Scientists waste substantial time writing grant proposals, potentially squandering much of the scientific value of funding programs. This Meta-Research Article shows that, unfortunately, grant-proposal competitions are inevitably inefficient when the number of awards is small, but efficiency can be restored by awarding funds through a modified lottery, or by weighting past research success more heavily in funding decisions.

@adredish @albertcardona that makes a lot of sense to me. Although it's still unclear how to evaluate people on track record without introducing biases (personal, scientific)

@cian @albertcardona

There are always biases; the current system has plenty. The important question is whether judging on track record introduces fewer biases than judging on "potential". (My guess is that it does, because a track record can be assessed more objectively than the nebulous question of potential.)

The best way to mitigate biases is to start with a list of specific questions that you are going to ask of each candidate [it is important to generate that list in advance] and for each candidate, answer the question specifically.

@cian @albertcardona

Importantly, that trick of asking specific questions does not address systemic bias, where past differences in opportunity lead to differences in track record. Those need to be addressed separately, through explicit correction mechanisms. But such systemic biases are even worse when judging potential than when judging track record, since "potential" is more nebulous, whereas a track record can at least be compared against the opportunities that were available.