📣🧪 New results: Evaluating Scientific Writing with Expert Preferences

👉 Today we are presenting our first two papers that build on HAI-Co2, our recent position paper on human-AI text co-construction in expert domains:

Scientific writing is an open-ended expert task shaped by user preferences. Evaluating it means capturing both general and expert-specific quality criteria. Our new papers push preference-aligned evaluation forward.

(1/🧵)

🧾 Reward Modeling for Scientific Writing Evaluation
Furkan ลžahinuรง, Subhabrata Dutta, Iryna Gurevych

https://arxiv.org/abs/2601.11374

(2/🧵)

Reward Modeling for Scientific Writing Evaluation

Scientific writing is an expert-domain task that demands deep domain knowledge, task-specific requirements and reasoning capabilities that leverage the domain knowledge to satisfy the task specifications. While scientific text generation has been widely studied, its evaluation remains a challenging and open problem. It is critical to develop models that can be reliably deployed for evaluating diverse open-ended scientific writing tasks while adhering to their distinct requirements. However, existing LLM-based judges and reward models are primarily optimized for general-purpose benchmarks with fixed scoring rubrics and evaluation criteria. Consequently, they often fail to reason over sparse knowledge of scientific domains when interpreting task-dependent and multi-faceted criteria. Moreover, fine-tuning for each individual task is costly and impractical for low-resource settings. To bridge these gaps, we propose cost-efficient, open-source reward models tailored for scientific writing evaluation. We introduce a two-stage training framework that initially optimizes scientific evaluation preferences and then refines reasoning capabilities. Our multi-aspect evaluation design and joint training across diverse tasks enable fine-grained assessment and robustness to dynamic criteria and scoring rubrics. Experimental analysis shows that our training regime strongly improves LLM-based scientific writing evaluation. Our models generalize effectively across tasks and to previously unseen scientific writing evaluation settings, allowing a single trained evaluator to be reused without task-specific retraining.
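The two-stage idea (optimize evaluation preferences first, refine reasoning second) maps naturally onto standard reward-model training. Below is a minimal, hypothetical sketch of what stage one could look like with a multi-aspect, Bradley-Terry-style pairwise preference loss; the class and function names are illustrative assumptions, not code from the paper.

```python
# Hypothetical sketch (not the paper's implementation): stage-one pairwise
# preference training for a multi-aspect scientific-writing reward model.
import torch
import torch.nn as nn

class MultiAspectRewardHead(nn.Module):
    """Maps a pooled text representation to one score per evaluation aspect."""
    def __init__(self, hidden_size: int, num_aspects: int):
        super().__init__()
        self.head = nn.Linear(hidden_size, num_aspects)

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        # pooled: (batch, hidden_size) -> (batch, num_aspects)
        return self.head(pooled)

def pairwise_preference_loss(scores_chosen: torch.Tensor,
                             scores_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry loss: push each aspect score of the expert-preferred
    response above the corresponding score of the rejected response."""
    return -torch.nn.functional.logsigmoid(scores_chosen - scores_rejected).mean()

# Toy usage with random stand-ins for pooled encoder outputs.
head = MultiAspectRewardHead(hidden_size=16, num_aspects=4)
pooled_chosen, pooled_rejected = torch.randn(2, 16), torch.randn(2, 16)
loss = pairwise_preference_loss(head(pooled_chosen), head(pooled_rejected))
loss.backward()
```

A second stage would then continue training the same evaluator on reasoning-style supervision, which is what lets one model cover varied rubrics without per-task retraining.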


📑 Expert Preference-based Evaluation of Automated Related Work Generation
Furkan ลžahinuรง, Subhabrata Dutta, Iryna Gurevych

https://arxiv.org/abs/2508.07955

(3/🧵)

Expert Preference-based Evaluation of Automated Related Work Generation

Expert domain writing, such as scientific writing, typically demands extensive domain knowledge. Although large language models (LLMs) show promising potential in this task, evaluating the quality of automatically generated scientific writing is a crucial open issue, as it requires knowledge of domain-specific criteria and the ability to discern expert preferences. Conventional task-agnostic automatic evaluation metrics and LLM-as-a-judge systems, primarily designed for mainstream NLP tasks, are insufficient to grasp expert preferences and domain-specific quality standards. To address this gap and support realistic human-AI collaborative writing, we focus on related work generation, one of the most challenging scientific tasks, as an exemplar. We propose GREP, a multi-turn evaluation framework that integrates classical related work evaluation criteria with expert-specific preferences. Our framework decomposes the evaluation into smaller fine-grained dimensions. This localized evaluation is further augmented with contrastive examples to provide detailed contextual guidance for the evaluation dimensions. Empirical investigation reveals that our framework is able to assess the quality of related work sections in a much more robust manner compared to standard LLM judges, reflects natural scenarios of scientific writing, and bears a strong correlation with the assessment of human experts. We also observe that generations from state-of-the-art (SoTA) LLMs struggle to satisfy validation constraints of a suitable related work section.
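To make the decomposition concrete, here is a small, hypothetical sketch of how a per-dimension judge prompt anchored by contrastive examples might be assembled; the dimension names, wording, and scoring scale are illustrative assumptions, not GREP's actual prompts.

```python
# Hypothetical sketch (not the GREP implementation): one judge prompt per
# fine-grained dimension, each anchored by a contrastive good/bad example.
DIMENSIONS = {
    "coverage": "Does the section discuss the relevant prior work?",
    "coherence": "Is the narrative across cited works logically organized?",
    "citation_grounding": "Are claims about the cited papers accurate?",
}

def build_judge_prompt(dimension: str, question: str, candidate: str,
                       good_example: str, bad_example: str) -> str:
    """One evaluation turn: a single dimension, framed by a contrastive pair."""
    return (
        f"You are evaluating a related work section on ONE dimension: {dimension}.\n"
        f"Question: {question}\n\n"
        f"Example that satisfies this dimension:\n{good_example}\n\n"
        f"Example that violates this dimension:\n{bad_example}\n\n"
        f"Candidate section:\n{candidate}\n\n"
        f"Give a score from 1 to 5 and justify it briefly."
    )

for dim, question in DIMENSIONS.items():
    prompt = build_judge_prompt(
        dim, question,
        candidate="<generated related work section>",
        good_example="<positive contrastive anchor>",
        bad_example="<negative contrastive anchor>",
    )
    print(prompt[:80])  # each prompt would go to the judge LLM as its own turn
```

Scoring each dimension in its own turn, with explicit positive and negative anchors, is what lets the aggregate judgment track expert preferences more closely than a single monolithic LLM-as-a-judge call.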
