New blog post on the NeurIPS'21 experiment re authors' perceptions of their own papers!

https://blog.ml.cmu.edu/2022/11/22/neurips2021-author-perception-experiment/

Key findings:

1) Authors significantly overestimate their papers' chances of acceptance. By like a LOT.


How do Authors' Perceptions about their Papers Compare with Co-authors' Perceptions and Peer-review Decisions?

Alina Beygelzimer, Yann N. Dauphin, Percy Liang, Jennifer Wortman Vaughan (NeurIPS 2021 Program Chairs), Charvi Rastogi, Ivan Stelmakh, Zhenyu Xue, Hal Daumé III, Emma Pierson, and Nihar B. Shah

Machine Learning Blog | ML@CMU | Carnegie Mellon University

@hal Odd that the lower 1/3rd tracks the yellow dotted line, but above that it's basically horizontal.

It could also be said that peer review is only slightly okay at rejecting the bottom quarter of papers, and that for the better papers acceptance is basically a 30% tossup.

@HenkPoley @hal yeah, I had this observation too — it looks like about 30 points (absolute) of the ~70% rejection rate is corroborated by authors' own opinions, but outside of that, the correlation with authors' own estimates is poor
@trochee @HenkPoley yup, just to be clear: these are - if people answered the question they were asked - their opinion of *whether it was likely to get in*, not their opinion of *whether it should get in*.
@hal @HenkPoley and it sounds like many authors approximated "will get in" with "deserves to get in"
@trochee @HenkPoley why do you conclude that?

@hal @HenkPoley that's what the down elbow suggests to me — overall, if you think you're not in the bottom 30%, you have a poorly calibrated estimator for acceptance

Maybe that's not the same thing.
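The calibration point in the thread can be sketched with a toy simulation (synthetic numbers, not the actual NeurIPS'21 data): if authors predict high acceptance probabilities regardless of outcome, their mean prediction will sit far above the actual acceptance rate, which is roughly the miscalibration the plot shows.

```python
# Illustrative sketch only: hypothetical predicted probabilities vs. outcomes,
# NOT the real NeurIPS'21 numbers.
import random

random.seed(0)
n = 1000

# Hypothetical: authors guess high (50-90%) regardless of true quality.
predicted = [random.uniform(0.5, 0.9) for _ in range(n)]

# Hypothetical: actual acceptance around 25%, independent of the guesses.
accepted = [random.random() < 0.25 for _ in range(n)]

mean_pred = sum(predicted) / n
accept_rate = sum(accepted) / n

# A well-calibrated population would have these two numbers close together.
print(f"mean predicted: {mean_pred:.2f}, actual acceptance rate: {accept_rate:.2f}")
```

With numbers like these, mean prediction lands near 0.7 while the acceptance rate sits near 0.25 - the same flavor of gap the thread is describing, though the real experiment bins by predicted score rather than just comparing means.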