Yesterday, an #OpenReview vulnerability led to the leak of reviewer identities at all the major academic AI conferences, including the ongoing #ICLR2026 conference. #ICLRLeaks This is both a huge disaster and an opportunity to tackle the serious flaws of AI research. eu.36kr.com/en/p/3572028...

Academic Circle in Uproar: ICLR Reviewers Reveal Identities, Low Scores Given by Friends

True Open Review: Unveiling Transparent and Authentic Evaluations

Fears include:
- Author retaliation against negative reviews.
- Bribery (evidence is already emerging).

Findings (or rather, confirmed long-standing suspicions):
- Massive abuse of (fully) AI-generated reviews.
- Conflicts of interest (e.g. reviewers rejecting papers that compete with their own).
In 2025, I myself reviewed a submission whose sole theorem and proof were clearly AI-generated. Embarrassingly, the theorem was uninteresting, its assumptions were ill-justified, and its proof was flawed. The paper was rejected. But could it have been accepted by AI-generated (or lazy) reviews?
I believe that the AI research failures exposed by the #ICLRLeaks illustrate broader societal concerns. While "innovation" is glorified, with its authors earning millions, regulation (here, in the form of reviewing) is botched, automated and under-funded. This is not sustainable.
Any powerful system should pay a lot of attention (& money) to corruption risks. It's costly, but essential. AI research has become an extremely powerful system: it now affects trillion-dollar valuations & geopolitical decisions. Yet it has not given itself the means to prevent corruption.
I should stress that this is not limited to AI research, though. Papers have been found to contain hidden instructions designed to hack AI-generated reviews. (To be fair, I actually believe that publication standards in computer science are higher than elsewhere.) www.nature.com/articles/d41...

Scientists hide messages in papers to game AI peer review

Some studies containing instructions in white text or small font — visible only to machines — will be withdrawn from preprint servers.

It's scary how #AIHype is taking over academia. This @[email protected] paper found that LLMs suck at tabular tasks (which should not be surprising, unless they memorized the test set). Yet the abstract is still phrased as if this were some kind of breakthrough (WTF?!?). www.nature.com/articles/s41...
This is a nice write-up of the incident and its takeaways. forum.cspaper.org/topic/191/ic... I strongly recommend acceptance ;)

ICLR = I Can Locate Reviewer: How an API Bug Turned Blind Review into a Data Apocalypse

On the night of November 27, 2025, computer-science Twitter, Rednote (Xiaohongshu), Reddit and WeChat groups lit up with the same five words: “ICLR can open t...

CSPaper: peer review sidekick
Artificial intelligence research has a slop problem, academics say: ‘It’s a mess’

AI research in question as author claims to have written over 100 papers on AI that one expert calls a ‘disaster’

The Guardian