🛠️【NEW AI TECHNOLOGY】Introducing the Aphrodite framework - an AI system that classifies evidence into four levels: Known/Claimed/Inferred/Speculated. Testing with Claude showed:
✅ +85% data-collection efficiency
✅ Reduced AI "hallucinations" through source verification
✅ Consequence predictions at 24h/7d/30d horizons
👉 Open source, focused on making the AI's reasoning process transparent (illustrative sketch below).
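
The post doesn't show Aphrodite's actual API, so here is only a minimal Python sketch of what a four-level evidence taxonomy like the one described might look like. All names (`EvidenceLevel`, `Statement`, `requires_verification`) are hypothetical and not taken from the project:

```python
from dataclasses import dataclass
from enum import Enum

class EvidenceLevel(Enum):
    """The four-level taxonomy from the post; names are illustrative."""
    KNOWN = "known"            # verified directly against a cited source
    CLAIMED = "claimed"        # asserted by a source, not independently verified
    INFERRED = "inferred"      # derived by reasoning from known/claimed facts
    SPECULATED = "speculated"  # plausible guess without supporting evidence

@dataclass
class Statement:
    text: str
    level: EvidenceLevel
    sources: list[str]  # citations or URLs backing the statement (may be empty)

def requires_verification(stmt: Statement) -> bool:
    """Flag statements to source-check before surfacing them, mirroring the
    post's idea of reducing hallucinations through source verification."""
    return stmt.level in (EvidenceLevel.CLAIMED, EvidenceLevel.SPECULATED)

# Example: tag a model claim and decide whether it needs a source check.
s = Statement("The framework was released in 2025.", EvidenceLevel.CLAIMED, [])
print(requires_verification(s))  # True -> verify against a source first
```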

#AIFramework #AIVietnam #EvidenceBasedAI #ReduceAIHallucinations #NewTechnology

https://www.reddit.com/r/LocalLLaMA/comments/1q8szuv/project_i_built_ai_reasoning_in

Work by: Max Glockner (UKP Lab), Xiang Jiang (Amazon AGI), Leonardo F. R. Ribeiro (Amazon AGI), Iryna Gurevych (UKP Lab), and Markus Dreyer (Amazon AGI).

📄 Paper: https://arxiv.org/abs/2505.05949
💾 Data: https://huggingface.co/datasets/mglockner/neoqa
💻 Code: https://github.com/amazon-science/neoqa

#ACL2025 #NLProc #EvidenceBasedAI #LLM

(3/3)

The original post was published on Twitter/X by Markus Dreyer:
https://x.com/markusdr/status/1924873660969652306

NeoQA: Evidence-based Question Answering with Generated News Events

Evaluating Retrieval-Augmented Generation (RAG) in large language models (LLMs) is challenging because benchmarks can quickly become stale. Questions initially requiring retrieval may become answerable from pretraining knowledge as newer models incorporate more recent information during pretraining, making it difficult to distinguish evidence-based reasoning from recall. We introduce NeoQA (News Events for Out-of-training Question Answering), a benchmark designed to address this issue. To construct NeoQA, we generated timelines and knowledge bases of fictional news events and entities along with news articles and Q&A pairs to prevent LLMs from leveraging pretraining knowledge, ensuring that no prior evidence exists in their training data. We propose our dataset as a new platform for evaluating evidence-based question answering, as it requires LLMs to generate responses exclusively from retrieved evidence and only when sufficient evidence is available. NeoQA enables controlled evaluation across various evidence scenarios, including cases with missing or misleading details. Our findings indicate that LLMs struggle to distinguish subtle mismatches between questions and evidence, and suffer from short-cut reasoning when key information required to answer a question is missing from the evidence, underscoring key limitations in evidence-based reasoning.
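
The abstract describes the evaluation protocol (answer only from retrieved evidence, abstain when evidence is insufficient) but not the dataset schema or harness. Below is a minimal Python sketch of that protocol under assumed field names (`question`, `evidence`, `answer`, `answerable`) and a hypothetical `ask_llm` callable; the real harness is in the linked repository (https://github.com/amazon-science/neoqa):

```python
ABSTAIN = "UNANSWERABLE"

def evaluate(examples, ask_llm):
    """examples: list of dicts with assumed keys 'question', 'evidence'
    (retrieved fictional news text), 'answer', and 'answerable'
    (False when key details are missing from the evidence).
    ask_llm: any callable mapping a prompt string to a model reply."""
    correct = 0
    for ex in examples:
        prompt = (
            "Answer ONLY from the evidence below. "
            f"If the evidence is insufficient, reply {ABSTAIN}.\n\n"
            f"Evidence:\n{ex['evidence']}\n\n"
            f"Question: {ex['question']}"
        )
        pred = ask_llm(prompt).strip()
        if ex["answerable"]:
            correct += pred == ex["answer"]
        else:
            # The failure mode the paper highlights: models take reasoning
            # short cuts and answer anyway instead of abstaining.
            correct += pred == ABSTAIN
    return correct / len(examples)
```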
