๐Ÿ”ฅ ๐— ๐—ฒ๐˜๐—ฎ-๐—ฟ๐—ฒ๐˜ƒ๐—ถ๐—ฒ๐˜„๐—ถ๐—ป๐—ด ๐—ถ๐˜€ ๐—บ๐—ผ๐—ฟ๐—ฒ ๐˜๐—ต๐—ฎ๐—ป ๐—ฎ ๐˜€๐˜‚๐—บ๐—บ๐—ฎ๐—ฟ๐—ถ๐˜‡๐—ฎ๐˜๐—ถ๐—ผ๐—ป - ๐—ถ๐˜โ€™๐˜€ ๐—ฑ๐—ฒ๐—ฐ๐—ถ๐˜€๐—ถ๐—ผ๐—ป-๐—บ๐—ฎ๐—ธ๐—ถ๐—ป๐—ด.

In our new paper, โ€œ๐——๐—ฒ๐—ฐ๐—ถ๐˜€๐—ถ๐—ผ๐—ป-๐— ๐—ฎ๐—ธ๐—ถ๐—ป๐—ด ๐˜„๐—ถ๐˜๐—ต ๐——๐—ฒ๐—น๐—ถ๐—ฏ๐—ฒ๐—ฟ๐—ฎ๐˜๐—ถ๐—ผ๐—ป: ๐— ๐—ฒ๐˜๐—ฎ-๐—ฟ๐—ฒ๐˜ƒ๐—ถ๐—ฒ๐˜„๐—ถ๐—ป๐—ด ๐—ฎ๐˜€ ๐—ฎ ๐——๐—ผ๐—ฐ๐˜‚๐—บ๐—ฒ๐—ป๐˜-๐—ด๐—ฟ๐—ผ๐˜‚๐—ป๐—ฑ๐—ฒ๐—ฑ ๐——๐—ถ๐—ฎ๐—น๐—ผ๐—ด๐˜‚๐—ฒโ€, we ask how AI can support meta-reviewers ๐—ถ๐—ป ๐˜๐—ต๐—ฒ ๐—ฑ๐—ฒ๐—ฐ๐—ถ๐˜€๐—ถ๐—ผ๐—ป ๐—ฝ๐—ฟ๐—ผ๐—ฐ๐—ฒ๐˜€๐˜€ โ€” ๐—ป๐—ผ๐˜ ๐—ท๐˜‚๐˜€๐˜ ๐—ถ๐—ป ๐˜„๐—ฟ๐—ถ๐˜๐—ถ๐—ป๐—ด ๐˜๐—ต๐—ฒ ๐—ณ๐—ถ๐—ป๐—ฎ๐—น ๐—ฟ๐—ฒ๐—ฝ๐—ผ๐—ฟ๐˜.

๐Ÿ“š ๐—ง๐—ต๐—ฒ ๐—ฝ๐—ฟ๐—ผ๐—ฏ๐—น๐—ฒ๐—บ
Meta-reviewers face a rapidly growing volume of submissions and increasingly complex discussions. While current AI can generate meta-review text via summarization, ๐—ถ๐˜ ๐—ฑ๐—ผ๐—ฒ๐˜€๐—ปโ€™๐˜ ๐˜€๐˜‚๐—ฝ๐—ฝ๐—ผ๐—ฟ๐˜ ๐˜๐—ต๐—ฒ ๐—ฐ๐—ผ๐—ฟ๐—ฒ ๐˜๐—ฎ๐˜€๐—ธ: ๐—บ๐—ฎ๐—ธ๐—ถ๐—ป๐—ด ๐—ถ๐—ป๐—ณ๐—ผ๐—ฟ๐—บ๐—ฒ๐—ฑ, ๐—ฑ๐—ผ๐—ฐ๐˜‚๐—บ๐—ฒ๐—ป๐˜-๐—ด๐—ฟ๐—ผ๐˜‚๐—ป๐—ฑ๐—ฒ๐—ฑ ๐—ฑ๐—ฒ๐—ฐ๐—ถ๐˜€๐—ถ๐—ผ๐—ป๐˜€.

๐Ÿค– ๐—ช๐—ต๐—ฎ๐˜ ๐—ด๐—ผ๐—ฒ๐˜€ ๐˜„๐—ฟ๐—ผ๐—ป๐—ด ๐˜๐—ผ๐—ฑ๐—ฎ๐˜†:
LLM-based systems often produce ๐˜ƒ๐—ฎ๐—ด๐˜‚๐—ฒ, ๐—ด๐—ฒ๐—ป๐—ฒ๐—ฟ๐—ถ๐—ฐ, ๐˜„๐—ฒ๐—ฎ๐—ธ๐—น๐˜† ๐—ด๐—ฟ๐—ผ๐˜‚๐—ป๐—ฑ๐—ฒ๐—ฑ outputs that donโ€™t help experts.

๐Ÿ’ก ๐—ข๐˜‚๐—ฟ ๐—ณ๐—ฟ๐—ฎ๐—บ๐—ถ๐—ป๐—ด
We rethink meta-reviewing as a document-grounded dialogue between AI and the meta-reviewer: structured, interactive, and evidence-seeking.

🔧 By combining 𝘀𝘆𝗻𝘁𝗵𝗲𝘁𝗶𝗰 𝗱𝗶𝗮𝗹𝗼𝗴𝘂𝗲 𝗱𝗮𝘁𝗮 and 𝗶𝘁𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝘀𝗲𝗹𝗳-𝗿𝗲𝗳𝗶𝗻𝗲𝗺𝗲𝗻𝘁, we train 𝘀𝗽𝗲𝗰𝗶𝗮𝗹𝗶𝘇𝗲𝗱 (𝘀𝗺𝗮𝗹𝗹𝗲𝗿) 𝗺𝗼𝗱𝗲𝗹𝘀 that better support deliberation.

โšก ๐—ž๐—ฒ๐˜† ๐—ฟ๐—ฒ๐˜€๐˜‚๐—น๐˜๐˜€
โ€ข ๐—จ๐—ฝ ๐˜๐—ผ ~๐Ÿฑ๐Ÿฌ% ๐—น๐—ฒ๐˜€๐˜€ meta-reviewing time
โ€ข ๐— ๐—ผ๐—ฟ๐—ฒ ๐—ฐ๐—ผ๐—บ๐—ฝ๐—ฟ๐—ฒ๐—ต๐—ฒ๐—ป๐˜€๐—ถ๐˜ƒ๐—ฒ, ๐—บ๐—ผ๐—ฟ๐—ฒ ๐—ฑ๐—ฒ๐˜๐—ฎ๐—ถ๐—น๐—ฒ๐—ฑ meta-reports
โ€ข Small fine-tuned models ๐—ผ๐˜‚๐˜๐—ฝ๐—ฒ๐—ฟ๐—ณ๐—ผ๐—ฟ๐—บ larger closed LLMs

The future of peer review is not automation, but a 𝘥𝘪𝘢𝘭𝘰𝘨𝘶𝘦 𝘣𝘦𝘵𝘸𝘦𝘦𝘯 𝘈𝘐 𝘢𝘯𝘥 𝘦𝘹𝘱𝘦𝘳𝘵𝘴 💬🔬

More details, resources and discussions coming soon. We look forward to engaging with the community.

📄 Paper: https://arxiv.org/abs/2508.05283

💻 Code and data: https://github.com/UKPLab/eacl2026-meta-review-as-dialog

🔗 Project: https://ukplab.github.io/eacl2026-meta-review-as-dialog/

Decision-Making with Deliberation: Meta-reviewing as a Document-grounded Dialogue

Meta-reviewing is a pivotal stage in the peer-review process, serving as the final step in determining whether a paper is recommended for acceptance. Prior research on meta-reviewing has treated this as a summarization problem over review reports. However, complementary to this perspective, meta-reviewing is a decision-making process that requires weighing reviewer arguments and placing them within a broader context. Prior research has demonstrated that decision-makers can be effectively assisted in such scenarios via dialogue agents. In line with this framing, we explore the practical challenges of realizing dialogue agents that can effectively assist meta-reviewers. Concretely, we first address the issue of data scarcity for training dialogue agents by generating synthetic data using Large Language Models (LLMs) based on a self-refinement strategy to improve the relevance of these dialogues to expert domains. Our experiments demonstrate that this method produces higher-quality synthetic data and can serve as a valuable resource towards training meta-reviewing assistants. Subsequently, we utilize this data to train dialogue agents tailored for meta-reviewing and find that these agents outperform off-the-shelf LLM-based assistants for this task. Finally, we apply our agents in real-world meta-reviewing scenarios and confirm their effectiveness in enhancing the efficiency of meta-reviewing. (Code available at: https://github.com/UKPLab/eacl2026-meta-review-as-dialog)

---

And follow the authors Sukannya Purkayastha, Nils Dycke, and Iryna Gurevych from the Ubiquitous Knowledge Processing Lab (UKP Lab), Technische Universität Darmstadt and National Research Center for Applied Cybersecurity ATHENE, as well as Anne Lauscher from the Data Science Group, University of Hamburg.

See you this week in Rabat 🕌! #EACL2026

#EACL2026 #PeerReview #ScientificPublishing #AIforScience #LLMs #DialogueSystems #Evaluation #ResearchIntegrity #NLP #MachineLearning #UKPLab